Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse.
There is considerable concern about the future of AI, especially since a number of prominent computer scientists have raised the risks of Artificial General Intelligence (AGI)—an AI smarter than a human being. They worry that an AGI will lead to mass unemployment, or that AI will grow beyond human control—or worse (the movies Terminator and 2001 come to mind).
Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI. But this focus on apocalyptic events also robs most of us of our agency. AI becomes a thing we either build or don't build, something that no one outside of a few dozen Silicon Valley executives and top government officials really has any say over.