This Image of a White Barack Obama Is AI’s Racial Bias Problem In a Nutshell

A pixelated image of Barack Obama upsampled to the image of a white man has sparked another discussion on racial bias in artificial intelligence and machine learning. Read More

#bias, #gans

Democratizing artificial intelligence is a double-edged sword

When company leaders talk about democratizing artificial intelligence (AI), it’s easy to imagine what they have in mind. The more people who have access to the knowledge, tools, and data required to build an AI system, the more innovations are bound to emerge. Efficiency improves and engagement increases. Faced with a shortage of technical talent? Microsoft, Amazon, and Google have all released premade drag-and-drop or no-code AI tools that allow people to integrate AI into applications without needing to know how to build machine learning (ML) models. Read More

#bias, #explainability

The two-year fight to stop Amazon from selling face recognition to the police

In the summer of 2018, nearly 70 civil rights and research organizations wrote a letter to Jeff Bezos demanding that Amazon stop providing face recognition technology to governments. As part of an increased focus on the role that tech companies were playing in enabling the US government’s tracking and deportation of immigrants, it called on Amazon to “stand up for civil rights and civil liberties.” “As advertised,” it said, “Rekognition is a powerful surveillance system readily available to violate rights and target communities of color.”

Along with the letter, the American Civil Liberties Union (ACLU) of Washington delivered over 150,000 petition signatures as well as another letter from the company’s own shareholders expressing similar demands. A few days later, Amazon’s employees echoed the concerns in an internal memo.

Despite the mounting pressure, Amazon continued with business as usual. Read More

#bias, #explainability, #image-recognition

Shortcut Learning in Deep Neural Networks

Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today’s machine intelligence. Numerous success stories have rapidly spread all over science, industry and society, but its limitations have only recently come into focus. In this perspective we seek to distill how many of deep learning’s problems can be seen as different symptoms of the same underlying problem: shortcut learning. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios. Related issues are known in Comparative Psychology, Education and Linguistics, suggesting that shortcut learning may be a common characteristic of learning systems, biological and artificial alike. Based on these observations, we develop a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications. Read More
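The phenomenon is easy to reproduce in a toy setting. The sketch below is an illustrative example written for this digest, not code from the paper: a logistic regression is trained on data where a shortcut feature matches the label perfectly, while a genuinely predictive feature is only 75% reliable. At test time the shortcut decorrelates from the label, and accuracy drops.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def make_data(shortcut_correlated):
    y = rng.integers(0, 2, n)
    # genuinely predictive feature: agrees with the label 75% of the time
    true_feat = np.where(rng.random(n) < 0.75, y, 1 - y).astype(float)
    # shortcut feature: identical to the label in training,
    # pure noise once the "benchmark" conditions change
    if shortcut_correlated:
        shortcut = y.astype(float)
    else:
        shortcut = rng.integers(0, 2, n).astype(float)
    return np.stack([true_feat, shortcut], axis=1), y

def train_logreg(X, y, lr=0.5, steps=500):
    # plain gradient descent on the logistic loss
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0).astype(int) == y).mean()

Xtr, ytr = make_data(shortcut_correlated=True)
Xte, yte = make_data(shortcut_correlated=False)
w, b = train_logreg(Xtr, ytr)
print("train accuracy:", accuracy(w, b, Xtr, ytr))
print("test accuracy: ", accuracy(w, b, Xte, yte))
```

The model leans on the perfectly correlated shortcut rather than the noisier real signal, so near-perfect training accuracy fails to carry over once the shortcut stops being informative.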

#bias, #deep-learning

Bias-Resilient Neural Network

The presence of bias and confounding effects is arguably one of the most critical challenges in machine learning applications and has given rise to pivotal debates in recent years. Such challenges range from spurious associations of confounding variables in medical studies to racial bias in gender or face recognition systems. One solution is to enhance and reorganize datasets so that they do not reflect biases, which is a cumbersome and labor-intensive task. The alternative is to make use of the available data and build models that account for these biases. Traditional statistical methods apply straightforward techniques such as residualization or stratification to precomputed features to account for confounding variables; however, these techniques are generally not suitable for end-to-end deep learning methods. In this paper, we propose a method based on an adversarial training strategy to learn discriminative features that are unbiased and invariant to the confounder(s). This is enabled by incorporating a new adversarial loss function that encourages a vanishing correlation between the bias and the learned features. We apply our method to synthetic data, medical images, and a gender classification (Gender Shades Pilot Parliaments Benchmark) dataset. Our results show that the features learned by our method not only yield superior prediction performance but are also uncorrelated with the bias or confounder variables. The code is available at http://github.com/QingyuZhao/BR-Net/. Read More
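The correlation penalty at the heart of this idea can be sketched in a few lines. This is an illustrative reconstruction written for this digest, not the authors’ BR-Net code (see the linked repository for that); the function name `correlation_loss` is chosen here. The penalty is the mean squared Pearson correlation between each learned feature and the confounder, which an adversarial objective would drive toward zero.

```python
import numpy as np

def correlation_loss(features, confounder, eps=1e-8):
    """Mean squared Pearson correlation between each feature column
    and the confounder. Minimizing it pushes the learned features
    toward carrying no linear information about the bias variable."""
    f = features - features.mean(axis=0)       # center each feature column
    c = confounder - confounder.mean()         # center the confounder
    cov = f.T @ c / len(c)                     # per-feature covariance with c
    corr = cov / (f.std(axis=0) * c.std() + eps)
    return float(np.mean(corr ** 2))

# demo: first column perfectly tracks the confounder, second is uncorrelated
conf = np.array([0.0, 1.0, 0.0, 1.0])
feats = np.stack([conf, np.array([0.0, 0.0, 1.0, 1.0])], axis=1)
print(correlation_loss(feats, conf))  # ≈ 0.5: mean of corr² values 1 and 0
```

In the paper’s setup this term is used adversarially during training, alongside the main task loss, so that the feature extractor learns representations on which a bias predictor cannot do better than chance.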

#bias

Machine learning ethics: what you need to know and what you can do

Ethics is, without a doubt, one of the most important topics to emerge in machine learning and artificial intelligence over the last year. While the reasons for this are complex, it nevertheless underlines that the area has reached technological maturity. After all, if artificial intelligence systems weren’t having a real, demonstrable impact on wider society, why would anyone be worried about their ethical implications?

It’s easy to dismiss the debate around machine learning and artificial intelligence as abstract and irrelevant to engineers’ and developers’ immediate practical concerns. However, this is wrong. Ethics needs to be seen as an important practical consideration for anyone using and building machine learning systems. Read More

#bias, #ethics

Scientists developed a new AI framework to prevent machines from misbehaving

They promised us the robots wouldn’t attack…

In what seems like dialogue lifted straight from the pages of a post-apocalyptic science fiction novel, researchers from the University of Massachusetts Amherst and Stanford claim they’ve developed an algorithmic framework that guarantees AI won’t misbehave. Read More

#bias

Why fair artificial intelligence might need bias

Businesses across industries are racing to integrate artificial intelligence (AI). Use cases are proliferating, from detecting fraud and increasing sales to improving customer experience, automating routine tasks, and providing predictive analytics.

With machine learning models relying on algorithms that learn patterns from vast pools of data, however, models are at risk of perpetuating bias present in the information they are fed. In this sense, AI’s mimicking of real-world human decisions is both a strength and a great weakness for the technology: it’s only as ‘good’ as the information it accesses. Read More

#bias, #ethics

Steve Wozniak Shares Perspectives On Technology, AI and Innovation

In an exclusive interview and in a presentation at the Novathon conference, the Apple co-founder discusses his love for technology, his fears about artificial intelligence, and his perspectives on the potential for digital transformation.

While optimistic about the future, Steve Wozniak is not ready to turn over his identity (nor his Tesla) to artificial intelligence anytime soon. At a conference in Budapest I attended, he mentioned that he had deleted his Facebook account because of privacy concerns, and that he no longer believes a totally autonomous car will happen in his lifetime. But Wozniak retains the passion and enthusiasm for technology and innovation that made him a household name as Apple’s co-founder. Read More

#artificial-intelligence, #bias

Artificial Intelligence Can Be Biased. Here’s What You Should Know.

Artificial intelligence has already started to shape our lives in ubiquitous and occasionally invisible ways. In its new documentary, In The Age of AI, FRONTLINE examines the promise and peril of this technology. AI systems are being deployed by hiring managers, courts, law enforcement, and hospitals — sometimes without the knowledge of the people being screened. And while these systems were initially lauded for being more objective than humans, it’s fast becoming clear that the algorithms harbor bias, too. Read More

#bias