In the last two years, the AI subfield of natural-language processing has seen enormous progress. For example, GPT-2, a language model developed by the San Francisco–based research lab OpenAI, has been used to generate fiction, fake news articles, and a practically infinite Choose Your Own Adventure–style text game.
But these kinds of models are essentially massive text-prediction systems that don’t take meaning into account, so the sentences they produce tend to be superficially fluent rather than truly meaningful. It’s hard, for example, to make a model stick to a particular topic like health care. Worse, models like GPT-2 can be goaded into producing racist and toxic output, making them even less useful.
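To make the "text prediction" point concrete, here is a minimal sketch using the open-source Hugging Face transformers library (an assumption; the article names no tooling). GPT-2 simply extends a prompt with statistically likely words, and nothing in that process forces it to stay on topic or say anything true:

```python
# A minimal sketch of GPT-2 as a pure next-token predictor, using the
# Hugging Face "transformers" library (an assumption; the article
# itself names no specific tooling).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with statistically likely tokens;
# nothing constrains it to stay on topic or to be factually correct.
result = generator("Health care in the United States", max_new_tokens=30)
print(result[0]["generated_text"])
```

Running this a few times produces fluent but meandering continuations, which is exactly the gap between surface fluency and meaning that researchers are trying to close.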