The phony comforts of AI skepticism

At the end of last month, I attended the inaugural edition of a conference in Berkeley called the Curve. The idea was to bring together engineers at big tech companies, independent safety researchers, academics, nonprofit leaders, and people who have worked in government to discuss the biggest questions of the day in artificial intelligence:

Does AI pose an existential threat? How should we weigh the risks and benefits of open weights? When, if ever, should AI be regulated? How? Should AI development be slowed down or accelerated? Should AI be handled as an issue of national security? When should we expect AGI?

If the idea was to produce thoughtful collisions between e/accs (effective accelerationists) and decels (advocates of slowing AI down), the Curve came up a bit short: the conference was long on existential dread, and I don't think I heard anyone say that AI development should speed up.

… At the moment, no one knows for sure whether the large language models that are now under development will achieve superintelligence and transform the world. And in that uncertainty, two primary camps of criticism have emerged. 

The first camp, which I associate with the external critics, holds that AI is fake and sucks. The second camp, which I associate more with the internal critics, believes that AI is real and dangerous. — Read More

#strategy

Intel’s Death and Potential Revival

In 1980, IBM, under pressure from its customers to provide computers for personal use, not just mainframes, set out to create the IBM PC. Given the project's low internal priority but high external demand, the company decided to outsource two critical components: Microsoft would provide the DOS operating system, which would run on the Intel 8088 processor.

Those two deals would shape the computing industry for the following 27 years. Given that the point of the personal computer was to run applications, the operating system that provided the APIs for those applications would have unassailable lock-in, leading to Microsoft's dominance, first with DOS and then with Windows, which was backwards compatible.

… It follows, then, that if the U.S. wants to make Intel viable, it ideally will not just give out money but also provide a point of integration. Given this, if the U.S. is serious about AGI, then the true Manhattan Project (doing something that will be very expensive and not necessarily economically rational) is filling in the middle of the sandwich. Saving Intel, in other words. — Read More

#nvidia

Trust Issues in AI

For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from the U.S. Department of Defense. But today’s tools are hardly the intentional product of the diverse generations of innovators that came before. We agree with Morozov that the “refuseniks,” as he calls them, are wrong to see AI as “irreparably tainted” by its origins. AI is better understood as a creative, global field of human endeavor that has been largely captured by U.S. venture capitalists, private equity, and Big Tech. But that was never the inevitable outcome, and it doesn’t need to stay that way. — Read More

#trust