FRONTLINE: In the Age of AI

FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to the rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs, and our world, and could enable the emergence of a surveillance society. Read More (Watch the Video)

#artificial-intelligence, #videos

Artificial Intelligence Can Be Biased. Here’s What You Should Know.

Artificial intelligence has already started to shape our lives in ubiquitous and occasionally invisible ways. In its new documentary, In the Age of AI, FRONTLINE examines the promise and peril of this technology. AI systems are being deployed by hiring managers, courts, law enforcement, and hospitals, sometimes without the knowledge of the people being screened. And while these systems were initially lauded for being more objective than humans, it’s fast becoming clear that the algorithms harbor bias, too. Read More

#bias

Remember that scary AI text-generator that was too dangerous to release? It’s out now

OpenAI today published the final model in its staged release for GPT-2, the spooky text generator the AI community’s been talking about all year.

GPT-2 uses machine learning to generate novel text based on a limited input. Basically, you can type a few sentences about anything you like and the AI will spit out some ‘related’ text. Unlike most ‘text generators’ it doesn’t output pre-written strings. GPT-2 makes up text that didn’t previously exist, at least according to OpenAI’s research paper. Read More
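
For readers who want to try it, here is a minimal sketch of that kind of prompted generation. It assumes the Hugging Face transformers library and the small public gpt2 checkpoint; neither is mentioned in the article, which doesn’t prescribe any particular tooling.

```python
# Minimal sketch of prompt-conditioned generation with the released
# GPT-2 weights, via the Hugging Face `transformers` library (an
# assumption; the article does not prescribe any tooling).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
outputs = generator(
    prompt,
    max_length=60,           # total tokens, prompt included
    num_return_sequences=1,
    do_sample=True,          # sample rather than greedy-decode
    top_k=50,                # restrict sampling to the 50 likeliest tokens
)
print(outputs[0]["generated_text"])
```

Sampling (rather than greedy decoding) is what makes each continuation novel instead of a single fixed completion.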

#news-summarization, #nlp

Language Models are Unsupervised Multitask Learners

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset, matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer, and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state-of-the-art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations. Read More
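
The zero-shot reading comprehension the abstract describes amounts to conditioning the model on a document followed by a question and letting it complete the answer. A rough sketch of that prompting pattern, assuming the Hugging Face port of GPT-2 (the paper evaluated the full 1.5B model; the small public checkpoint here is just for illustration):

```python
# Sketch of zero-shot reading comprehension as described in the abstract:
# condition the language model on a document plus a question, then let it
# complete the answer. Uses the small public GPT-2 checkpoint (an
# assumption; the paper's CoQA results used the full 1.5B model).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

document = (
    "The Apollo 11 mission landed the first humans on the Moon in 1969. "
    "Neil Armstrong was the first person to step onto the lunar surface."
)
# Document, then question, then an answer cue for the model to complete.
prompt = document + "\nQ: Who first stepped onto the Moon?\nA:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=False,                       # greedy decoding
    pad_token_id=tokenizer.eos_token_id,
)
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:])
print(answer.strip())
```

No CoQA training examples are involved; the task is specified entirely by the prompt, which is the point of the zero-shot setting.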

#news-summarization, #nlp

Results of the NIPS Adversarial Vision Challenge 2018

The winners of the NIPS Adversarial Vision Challenge 2018 have been determined. Overall, more than 400 participants submitted over 3,000 models and attacks. This year the competition focused on real-world scenarios in which attackers have only low-volume query access to models (up to 1,000 queries per sample). The models returned only their final decision, not gradients or confidence scores. This mimics a typical threat scenario for deployed machine learning systems and was meant to push the development of efficient decision-based attacks as well as more robust models. Read More
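
In that setting an attacker sees only the model’s final label. Below is a stripped-down sketch of a decision-based attack in that spirit: a simplified random walk loosely inspired by the Boundary Attack, not any particular winning entry. query_label is a hypothetical stand-in for the challenge’s rate-limited model endpoint.

```python
import numpy as np

def query_label(x):
    """Hypothetical stand-in for the challenge's rate-limited endpoint:
    returns only the model's final predicted class (no gradients, no
    confidence scores)."""
    raise NotImplementedError

def decision_based_attack(x, true_label, budget=1000, step=0.05, seed=0):
    """Simplified random-walk attack in the spirit of the Boundary Attack:
    start from a large perturbation assumed to already flip the label,
    then move toward the original image while the prediction stays wrong."""
    rng = np.random.default_rng(seed)
    # Initialize with heavy noise (assumed adversarial for this sketch).
    adv = np.clip(x + rng.normal(0.0, 0.5, x.shape), 0.0, 1.0)
    for _ in range(budget):  # one label query per iteration
        candidate = adv + step * (x - adv)  # step toward the original image
        candidate = np.clip(
            candidate + rng.normal(0.0, 0.01, x.shape), 0.0, 1.0
        )  # plus a small random perturbation
        if query_label(candidate) != true_label:  # still misclassified?
            adv = candidate                       # accept the move
    return adv
```

The query budget is the binding constraint: each accepted or rejected step costs one of the 1,000 allowed queries per sample.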

#adversarial

How Machine Learning Pushes Us to Define Fairness

Bias is machine learning’s original sin. It’s embedded in machine learning’s essence: the system learns from data, and so it is prone to picking up the human biases that the data represents. For example, an ML hiring system trained on existing American employment data is likely to “learn” that being a woman correlates poorly with being a CEO.
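
A toy illustration of that mechanism, with wholly invented data and feature names: a scikit-learn classifier fit on synthetic “hiring” outcomes that encode a historical gender bias will learn that correlation directly.

```python
# Toy illustration (invented data): a classifier trained on historically
# biased outcomes learns the gender correlation, as the article warns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
is_woman = rng.integers(0, 2, n)        # protected attribute
experience = rng.normal(10, 3, n)       # legitimate feature
# Historical bias baked into the labels: at equal experience,
# men were promoted more often.
promoted = (experience + 4 * (1 - is_woman) + rng.normal(0, 2, n)) > 12

X = np.column_stack([is_woman, experience])
clf = LogisticRegression().fit(X, promoted)

print("coefficient on is_woman:  ", clf.coef_[0][0])  # strongly negative
print("coefficient on experience:", clf.coef_[0][1])  # positive
```

The negative coefficient on is_woman is not a defect in the learner; it is a faithful summary of the biased labels it was given.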

Cleaning the data so thoroughly that the system will discover no hidden, pernicious correlations can be extraordinarily difficult. Even with the greatest care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the current, necessary focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI.
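
One reason the cleaning is so hard is redundant encoding: dropping the protected attribute does not help when other features act as proxies for it. A companion sketch to the one above (same invented setup, but with the gender column removed and a correlated career-gap proxy left in) makes the point:

```python
# Invented data again: drop the protected attribute, but keep a proxy
# feature that correlates with it. The bias resurfaces through the proxy
# even though gender is never shown to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
is_woman = rng.integers(0, 2, n)
experience = rng.normal(10, 3, n)
promoted = (experience + 4 * (1 - is_woman) + rng.normal(0, 2, n)) > 12

# Proxy feature correlated with the (dropped) protected attribute.
career_gap = 2 * is_woman + rng.normal(0, 0.5, n)

X_clean = np.column_stack([experience, career_gap])  # no gender column
clf = LogisticRegression().fit(X_clean, promoted)

women, men = is_woman == 1, is_woman == 0
print("predicted promotion rate, women:", clf.predict(X_clean)[women].mean())
print("predicted promotion rate, men:  ", clf.predict(X_clean)[men].mean())
```

The predicted promotion rates for women and men still diverge: the model has reconstructed the bias from the proxy, which is exactly the kind of subtle pattern that evades well-intentioned data cleaning.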

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Read More

#ethics