The algorithms that detect hate speech online are biased against black people

Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms using natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online.

But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study, researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech that totaled around 155,800 Twitter posts. Read More
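As a rough illustration of the kind of disparity these studies measure, a per-group comparison of flag rates can be computed from a model's decisions. The sketch below is hypothetical: the tiny dataset, the column names (`dialect`, `flagged`), and the AAE/SAE group labels are assumptions for illustration, not the studies' actual code or data.

```python
import pandas as pd

# Hypothetical audit data: each row is a tweet with the model's decision
# and an (inferred) dialect group. Columns are illustrative assumptions.
df = pd.DataFrame({
    "dialect": ["AAE", "AAE", "AAE", "SAE", "SAE", "SAE", "SAE", "AAE"],
    "flagged": [1, 1, 0, 0, 1, 0, 0, 1],
})

# Per-group flag rate: how often the model marks a group's tweets as
# offensive or hateful.
flag_rates = df.groupby("dialect")["flagged"].mean()

# Disparity ratio relative to the non-AAE group; a value well above 1.0
# signals the kind of skew the studies report (e.g. ~2.2x for AAE tweets).
disparity = flag_rates["AAE"] / flag_rates["SAE"]
print(flag_rates)
print(f"AAE/SAE flag-rate ratio: {disparity:.2f}")
```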

#bias, #nlp

The AI Renaissance portrait generator isn't great at painting people of color

Surprise! Artificial intelligence-generated portraits based on artwork from 15th-century Europe… kind of suck at depicting people of color.

Because we’re apparently always ready to hand over our photos for the sake of a trend, the internet’s current obsession is an AI portrait generator that deconstructs your selfies and rebuilds them as Renaissance and Baroque portraits.

Created by researchers at the MIT-IBM Watson AI Lab, AI Portrait Ars is a fun way to see how you would have been perceived if you lived in another time period.

“Portraits interpret the external beauty, social status, and then go beyond our body and face,” its creators wrote in the site’s “Why” section. “A portrait becomes a psychological analysis and a deep reflection on our existence.”

Unless, apparently, you’re not white.  Read More

#bias, #image-recognition

Welcome To The Machine Learning Biases That Still Exist In 2019

With machine learning, the world relies on technology for recommendation and recognition systems. But many of these systems are corrupted by bias and therefore do not function accurately. Human biases that can result in ML biases include reporting/sample bias, prejudice bias, measurement bias, automation bias, group attribution bias, and algorithm bias, among others. What can be done to prevent these biases? A simple starting point is sketched below. Read More
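One common first step against the biases listed above is a basic audit: compare how groups are represented in the training data against a reference distribution (to catch reporting/sample bias), and compare per-group error rates (to catch measurement or prejudice bias). The sketch below is a generic illustration; the group names, counts, reference shares, and predictions are all made-up assumptions, not data from the article.

```python
from collections import Counter

# Hypothetical training labels keyed by demographic group.
train_groups = ["group_a"] * 900 + ["group_b"] * 100

# Reporting/sample bias check: compare the training distribution with a
# reference (e.g. census or user-base) distribution.
reference = {"group_a": 0.5, "group_b": 0.5}
counts = Counter(train_groups)
total = sum(counts.values())
for group, ref_share in reference.items():
    train_share = counts[group] / total
    print(f"{group}: train={train_share:.2f}, reference={ref_share:.2f}, "
          f"ratio={train_share / ref_share:.2f}")

# Per-group false-positive rate check on hypothetical predictions: a large
# gap between groups points to measurement or prejudice bias.
y_true = [0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["group_b", "group_b", "group_b", "group_b",
          "group_a", "group_a", "group_a", "group_a"]

for g in sorted(set(groups)):
    fp = sum(1 for t, p, grp in zip(y_true, y_pred, groups)
             if grp == g and t == 0 and p == 1)
    neg = sum(1 for t, grp in zip(y_true, groups) if grp == g and t == 0)
    print(f"{g}: false-positive rate = {fp / neg:.2f}")
```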

#bias, #machine-learning