Weapons of Mass Production

Post Malone is having a good month.

The artist was featured on Beyoncé’s new album Cowboy Carter in the song “LEVII’S JEANS.” And in a few weeks, Post Malone will feature again on spring’s other big release—Taylor Swift’s The Tortured Poets Department.

Post Malone’s feature on Tortured Poets comes in a song called “Fortnight,” and the song already leaked online. Well, not actually—but a lot of people were fooled into thinking so. An AI-generated version of “Fortnight” took TikTok by storm last month (it’s actually a banger) and duped everyone into believing the track leaked. — Read More

#strategy

We’re Focusing on the Wrong Kind of AI Apocalypse

Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse.

There is considerable concern about the future of AI, especially as a number of prominent computer scientists have raised the risks of Artificial General Intelligence (AGI)—an AI smarter than a human being. They worry that an AGI will lead to mass unemployment, or that AI will grow beyond human control—or worse (the movies Terminator and 2001 come to mind).

Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI. But this focus on apocalyptic events also robs most of us of our agency. AI becomes a thing we either build or don’t build, and one that no one outside of a few dozen Silicon Valley executives and top government officials really has any say over. — Read More

#strategy

Inflection’s implosion and ChatGPT’s stall reveal AI’s consumer problem

Last year, Garry Tan, one of the most successful tech investors in Silicon Valley, questioned the business case for consumer-facing AI technology like ChatGPT.

… Less than a year later, his concerns are coming true.

This week, startup Inflection AI partially imploded. The startup’s two main founders decamped to Microsoft, taking a sizable team with them. — Read More

#strategy

The Global AI Talent Tracker 2.0

Since we launched our talent tracker in 2020, artificial intelligence (AI) has taken the world by storm. Ostensible breakthroughs in large language models and machine learning methods, as well as staggering improvements in compute capabilities, have made the power and potential of AI demonstrably clear.

While companies and institutions are racing to monetize the power of AI, the prospect of its full potential is also giving pause to governments around the world. Much uncertainty centers on how to balance AI’s power to solve a range of economic and social problems against the need to curtail its potential downsides.

But what’s certain is that a large chunk of the tech world’s capital and talent will be deployed toward bringing AI applications to the real world. If anything, the competition among countries in this arena will be fiercer than ever—and much of that competition will be over the indispensable input of an AI ecosystem: talent.  — Read More

#strategy

AI and the Evolution of Social Media

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use. — Read More

#strategy

Nearly a third of consumers think AI has improved workplace productivity

Artificial Intelligence (AI) is a tricky subject! Some feel it’s making their life much easier, while others feel it will impact society negatively. Some feel it needs to be regulated, and some feel brands need to be transparent about how they use it. Love it or hate it, you just cannot ignore it. In this piece, we’re asking consumers how they feel the technology has impacted productivity in their workplace.

A recent YouGov survey asked consumers across 17 international markets to what extent they feel AI systems like ChatGPT and Bard have improved or hindered overall productivity in their workplace over the last year. — Read More

#strategy

How Public AI Can Strengthen Democracy

With the world’s focus turning to misinformation, manipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know that democracy has an AI problem. But we’re learning that AI has a democracy problem, too. Both challenges must be addressed for the sake of democratic governance and public protection.

Just three Big Tech firms (Microsoft, Google, and Amazon) control about two-thirds of the global market for the cloud computing resources used to train and deploy AI models. They hold much of the AI talent and the capacity for large-scale innovation, and they face few public regulations for their products and activities.

The increasingly centralized control of AI is an ominous sign for the co-evolution of democracy and technology. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the general public or ordinary consumers.

To benefit society as a whole, we also need strong public AI as a counterbalance to corporate AI, as well as stronger democratic institutions to govern all of AI. — Read More

#strategy

Aggregator’s AI Risk

A recurring theme on Stratechery is that the only technology analogous to the Internet’s impact on humanity is the printing press: Johannes Gutenberg’s invention in 1440 drastically reduced the marginal cost of printing books, dramatically increasing the amount of information that could be disseminated.

Of course you still had to actually write the book, and set the movable type in the printing press; this, though, meant we had the first version of the classic tech business model: the cost to create a book was fixed, but the potential revenue from printing a book — and overall profitability — was a function of how many copies you could sell. Every additional copy increased the leverage on the up-front costs of producing the book in the first place, improving the overall profitability; this, by extension, meant there were strong incentives to produce popular books.

… In this view the Internet is the final frontier, and not just because the American West was finally settled: on the Internet there are, or at least were, no rules, and not just in the legalistic sense; there were also no more economic rules as understood in the world of the printing press. Publishing and distribution were now zero marginal cost activities, just like consumption: you didn’t need a printing press. — Read More

#strategy

How Google lost its way

Just two months after Google launched Gemini, its flashy new AI model, the company revealed that it had already built a better version. Gemini 1.5, Google said, was bigger, faster, and more capable than its predecessor. The February 15 announcement, outlined in a giddy 1,600-word blog post replete with sizzle reels, prompted buzzy coverage among AI researchers and the tech press.

For a few hours, anyway.

Later that day, OpenAI introduced Sora, a tool that generates videos up to 60 seconds long based on text prompts. The rapturous response was immediate. CEO Sam Altman took prompt requests from X users and posted the results in real time. Words like “eye-popping” and “shockingly powerful” were thrown around, while researchers mused about the threat to Hollywood and the potential for deepfakery. — Read More

#strategy

The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks

The emergence of publicly accessible artificial intelligence (AI) large language models such as ChatGPT has given rise to global conversations on the implications of AI capabilities. Emergent research on AI has challenged the assumption that creative potential is a uniquely human trait; thus, there seems to be a disconnect between human perception and what AI is objectively capable of creating. Here, we aimed to assess the creative potential of humans in comparison to AI. In the present study, human participants (N = 151) and GPT-4 provided responses for the Alternative Uses Task, Consequences Task, and Divergent Associations Task. We found that AI was robustly more creative along each divergent thinking measurement in comparison to the human counterparts. Specifically, when controlling for fluency of responses, AI was more original and elaborate. The present findings suggest that the current state of AI language models demonstrates higher creative potential than human respondents. — Read More

#strategy