A GUIDE TO NOT KILLING OR MUTILATING ARTIFICIAL INTELLIGENCE RESEARCH

What’s the fastest way to build a jig-saw puzzle? That was the question posed by Michael Polanyi in 1962. An obvious answer is to enlist help. How, then, could the helpers be coordinated most efficiently? If you divided the pieces among the helpers, progress would slow to a crawl. You couldn’t know how to usefully divide the pieces without first solving the puzzle.

Polanyi found it obvious that the fastest way to build a jig-saw puzzle is to let everyone work on it together in full sight of each other. No central authority could accelerate progress. “Under this system,” Polanyi wrote, “each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated.” Read More

#artificial-intelligence

The algorithms that detect hate speech online are biased against black people

Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online.

But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study, researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study found widespread evidence of the same racial bias against black speech in five widely used academic data sets for studying hate speech, totaling around 155,800 Twitter posts. Read More
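Both findings reduce to a simple audit: condition the classifier's flag rate on the writer's dialect group and compare the rates. Below is a minimal sketch of that computation, not taken from either study; the is_flagged stand-in, the group labels, and the sample tweets are all hypothetical.

from collections import defaultdict

def is_flagged(text):
    # Stand-in for a real hate-speech classifier's decision; a crude
    # keyword check is used only so the sketch runs end to end.
    return "slur" in text.lower()

def flag_rates(samples):
    # Fraction of each group's tweets that the classifier flags.
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for text, group in samples:
        counts[group][1] += 1
        counts[group][0] += is_flagged(text)
    return {group: flagged / total for group, (flagged, total) in counts.items()}

# Hypothetical (tweet_text, dialect_group) pairs; not real study data.
sample = [
    ("a harmless greeting", "AAE"),
    ("a joke quoting a slur", "AAE"),
    ("an ordinary comment", "SAE"),
    ("another ordinary comment", "SAE"),
]

print(flag_rates(sample))  # {'AAE': 0.5, 'SAE': 0.0}

On real data, the ratio between the two groups' rates is the figure the studies report: a ratio of roughly 1.5x to 2.2x means the classifier flags one dialect far more often than the other for comparable content.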

#bias, #nlp