Prominent computer scientists fear that AI could trigger human extinction. It’s time to have a real conversation about the realistic risks.
Last week, the Center for AI Safety (safe.ai) published a statement asserting that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” It was signed by AI scientists whom I deeply respect, including Yoshua Bengio and Geoffrey Hinton, and it received widespread media coverage.
I have to admit that I struggle to see how AI could pose any meaningful risk of human extinction. AI does carry real risks, including bias, unfairness, inaccurate outputs, job displacement, and concentration of power. But I see AI’s net impact as a massive contribution to society. It is saving lives by improving healthcare and making cars safer, improving education, making healthy food and numerous other goods and services more affordable, and democratizing access to information. I don’t understand how it could lead to human extinction.