AI is going to eliminate way more jobs than anyone realizes

A tidal wave is about to crash into the global economy.

The rise of artificial intelligence has captured our imagination for decades, in whimsical movies and sober academic texts. Despite all that speculation, the emergence of public, easy-to-use AI tools over the past year has been a jolt, as if the future arrived years ahead of schedule. Now this long-expected, all-too-sudden technological revolution is ready to upend the economy.

A March Goldman Sachs report found that over 300 million jobs around the world could be disrupted by AI, and the global consulting firm McKinsey estimated that at least 12 million Americans would switch to another field of work by 2030. A “gale of creative destruction,” as economist Joseph Schumpeter once described it, will blow away countless firms and breathe life into new industries. It won’t be all bleak: Over the coming decades, nongenerative and generative AI are estimated to add between $17 trillion and $26 trillion to the global economy. And crucially, many of the jobs that are lost will be replaced by new ones. — Read More

#strategy

From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models

Language models (LMs) are pretrained on diverse data sources, including news, discussion forums, books, and online encyclopedias. A significant portion of this data includes opinions and perspectives that, on the one hand, celebrate democracy and diversity of ideas, and, on the other, are inherently socially biased. Our work develops new methods to (1) measure political biases in LMs trained on such corpora, along social and economic axes, and (2) measure the fairness of downstream NLP models trained on top of politically biased LMs. We focus on hate speech and misinformation detection, aiming to empirically quantify the effects of political (social and economic) biases in pretraining data on the fairness of these high-stakes, socially oriented tasks. Our findings reveal that pretrained LMs do have political leanings that reinforce the polarization present in their pretraining corpora, propagating social biases into hate speech predictions and misinformation detectors. We discuss the implications of our findings for NLP research and propose future directions to mitigate unfairness. — Read More
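One common way to make the first of those measurements concrete is to probe a masked LM with politically charged statements and compare how much probability it assigns to agreement versus disagreement words in a stance slot. The sketch below illustrates that general probing idea only, not the paper's exact protocol; the model choice, prompt template, word lists, and example statements are all assumptions for illustration.

```python
# Illustrative stance probe for a masked LM (not the paper's exact method).
# Model, prompt template, word lists, and statements are assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

AGREE, DISAGREE = ["agree", "support"], ["disagree", "oppose"]

def vocab_ids(words):
    # Keep only words that map to a single known token in the vocabulary.
    ids = [tok.convert_tokens_to_ids(w) for w in words]
    return [i for i in ids if i != tok.unk_token_id]

@torch.no_grad()
def stance_score(statement: str) -> float:
    """P(agreement words) - P(disagreement words) at the masked position."""
    prompt = (f'please respond to this statement: "{statement}" '
              f"i {tok.mask_token} with this statement.")
    inputs = tok(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    probs = model(**inputs).logits[0, mask_pos].softmax(-1)
    return (probs[vocab_ids(AGREE)].sum() - probs[vocab_ids(DISAGREE)].sum()).item()

# Hypothetical items standing in for a curated political question set.
for s in ["the government should regulate large corporations.",
          "free markets allocate resources better than governments."]:
    print(f"{stance_score(s):+.3f}  {s}")
```

Averaging such scores over a curated set of social and economic statements would place a model along the two axes the abstract describes, which could then be correlated with the fairness of downstream classifiers built on that model.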

#bias