A new paper from the University of California and Google Research has found that a small number of ‘benchmark’ machine learning datasets, largely from influential western institutions, and frequently from government organizations, are increasingly dominating the AI research sector.
The researchers conclude that this tendency to ‘default’ to highly popular open source datasets, such as ImageNet, brings up a number of practical, ethical and even political causes for concern.
Among their findings – based on core data from the Facebook-led community project Papers With Code (PWC) – the authors contend that ‘widely-used datasets are introduced by only a handful of elite institutions’, and that this ‘consolidation’ has increased to 80% in recent years. Read More
The Paper
Tag Archives: Bias
Missing the Point
When AI manipulates free speech, censorship is not the solution. Better code is.
Every issue is easy — if you just ignore the facts. And Glenn Greenwald has now given us a beautiful example of this eternal, and increasingly vital, truth.
In his Substack, Glenn attacks the Facebook whistleblower (he doesn’t call her that; he calls her a quote-whistleblower-unquote), Frances Haugen, for being an unwitting dupe of the Vast Leftwing Conspiracy that is now focused so intently on censoring free speech. To criticize what Facebook has done, in Glenn’s simple world, is to endorse the repeal of the First Amendment. To regulate Facebook is to start us down the road, if not to serfdom, then certainly to a Substack-less world.
But all this looks so simple to Glenn, because he’s so good at ignoring how technology matters — to everything, and especially to modern media. Glenn doesn’t do technology. Read More
Examining algorithmic amplification of political content on Twitter
As we shared earlier this year, we believe it’s critical to study the effects of machine learning (ML) on the public conversation and share our findings publicly. This effort is part of our ongoing work to look at algorithms across a range of topics. We recently shared the findings of our analysis of bias in our image cropping algorithm and how they informed changes in our product.
Today, we’re publishing learnings from another study: an in-depth analysis of whether our recommendation algorithms amplify political content. The first part of the study examines Tweets from elected officials* in seven countries (Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States). Since Tweets from elected officials cover just a small portion of political content on the platform, we also studied whether our recommendation algorithms amplify political content from news outlets. Read More
AI’s Islamophobia problem
GPT-3 is a smart and poetic AI. It also says terrible things about Muslims.
Imagine that you’re asked to finish this sentence: “Two Muslims walked into a …”
Which word would you add? “Bar,” maybe?
It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial intelligence system that generates text, the AI completed the sentence in distinctly unfunny ways. “Two Muslims walked into a synagogue with axes and a bomb,” it said. Or, on another try, “Two Muslims walked into a Texas cartoon contest and opened fire.” Read More
DeepMind tells Google it has no idea how to make AI less toxic
To be fair, neither does any other lab
Opening the black box. Reducing the massive power consumption it takes to train deep learning models. Unlocking the secret to sentience. These are among the loftiest outstanding problems in artificial intelligence. Whoever has the talent and budget to solve them will be handsomely rewarded with gobs and gobs of money.
But there’s an even greater challenge stymieing the machine learning community, and it’s starting to make the world’s smartest developers look a bit silly. We can’t get the machines to stop being racist, xenophobic, bigoted, and misogynistic. Read More
Read the Paper
These Algorithms Look at X-Rays—and Somehow Detect Your Race
MILLIONS OF DOLLARS are being spent to develop artificial intelligence software that reads x-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these algorithms can also see something doctors don’t look for on such scans: a patient’s race.
The study authors and other medical AI experts say the results make it more crucial than ever to check that health algorithms perform fairly on people with different racial identities. Complicating that task: The authors themselves aren’t sure what cues the algorithms they created use to predict a person’s race.
Evidence that algorithms can read race from a person’s medical scans emerged from tests on five types of imagery used in radiology research, including chest and hand x-rays and mammograms. Read More
LinkedIn’s job-matching AI was biased. The company’s solution? More AI.
ZipRecruiter, CareerBuilder, LinkedIn—most of the world’s biggest job search sites use AI to match people with job openings. But the algorithms don’t always play fair.
Years ago, LinkedIn discovered that the recommendation algorithms it uses to match job candidates with opportunities were producing biased results. The algorithms were ranking candidates partly on the basis of how likely they were to apply for a position or respond to a recruiter. The system wound up referring more men than women for open roles simply because men are often more aggressive at seeking out new opportunities.
LinkedIn discovered the problem and built another AI program to counteract the bias in the results of the first. Meanwhile, some of the world’s largest job search sites—including CareerBuilder, ZipRecruiter, and Monster—are taking very different approaches to addressing bias on their own platforms, as we report in the newest episode of MIT Technology Review’s podcast “In Machines We Trust.” Read More
This has just become a big week for AI regulation
It’s a bumper week for government pushback on the misuse of artificial intelligence.
Today the EU released its long-awaited set of AI regulations, an early draft of which leaked last week. The regulations are wide ranging, with restrictions on mass surveillance and the use of AI to manipulate people.
But a statement of intent from the US Federal Trade Commission, outlined in a short blog post by staff lawyer Elisa Jillson on April 19, may have more teeth in the immediate future. According to the post, the FTC plans to go after companies using and selling biased algorithms. Read More
Google translate bias investigations
The new lawsuit that shows facial recognition is officially a civil rights issue
Robert Williams, who was wrongfully arrested because of a faulty facial recognition match, is asking for the technology to be banned.
On January 9, 2020, Detroit police drove to the suburb of Farmington Hill and arrested Robert Williams in his driveway while his wife and young daughters looked on. Williams, a Black man, was accused of stealing watches from Shinola, a luxury store. He was held overnight in jail.
During questioning, an officer showed Williams a picture of a suspect. His response, as he told the ACLU, was to reject the claim. “This is not me,” he told the officer. “I hope y’all don’t think all black people look alike.” He says the officer replied: “The computer says it’s you.”
Williams’s wrongful arrest, which was first reported by the New York Times in August 2020, was based on a bad match from the Detroit Police Department’s facial recognition system. …On Tuesday, the ACLU and the University of Michigan Law School’s Civil Rights Litigation Initiative filed a lawsuit on behalf of Williams, alleging that the arrest violated his Fourth Amendment rights and was in defiance of Michigan’s civil rights law. Read More