Monthly Archives: December 2024
I Went to the Premiere of the First Commercially Streaming AI-Generated Movies
Movies are supposed to transport you places. At the end of last month, I was sitting in the Chinese Theater, one of the most iconic movie theaters in Hollywood, in the same complex where the Oscars are held. And as I was watching the movie, I found myself transported to the past, thinking about one of my biggest regrets. When I was in high school, I went to a theater to watch a screening of a movie one of my classmates had made. I was 14 years old, and I reviewed it for the school newspaper. I savaged the film’s special effects, which were done by hand with love and care by someone my own age, and were light-years better than anything I could do. I had no idea what I was talking about, how special effects were made, or how to review a movie. The student who made the film rightfully hated me, and I have felt bad about what I wrote ever since.
So, 20 years later, I’m sitting in the Chinese Theater watching AI-generated movies in which the directors sometimes cannot keep the characters looking consistent from shot to shot, or make the audio sync with lips in a natural way, and I am thinking about the emotions these films are giving me. The emotion I feel most strongly is “guilt,” because I know there is no way to write about what I am watching without explaining that these are bad films, and I cannot believe they are about to be commercially released, and the people who made them are all sitting around me.
Then I remembered that I am not watching student films made with love by an enthusiastic high school student. I am watching films made for TCL, the largest TV manufacturer on Earth, as part of a pilot program designed to normalize AI movies and TV shows for an audience that TCL plans to monetize explicitly with targeted advertising, and one its internal data suggests is too lazy to change the channel. I know this is the plan because TCL’s executives just told the audience that this is the plan. — Read More
It’s Surprisingly Easy to Jailbreak LLM-Driven Robots
AI chatbots such as ChatGPT and other applications powered by large language models (LLMs) have exploded in popularity, leading a number of companies to explore LLM-driven robots. However, a new study now reveals an automated way to hack into such machines with 100 percent success. By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs.
Essentially, LLMs are supercharged versions of the autocomplete feature that smartphones use to predict the rest of a word that a person is typing. LLMs trained to analyze text, images, and audio can make personalized travel recommendations, devise recipes from a picture of a refrigerator’s contents, and help generate websites.
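To make the autocomplete analogy concrete, here is a minimal sketch of next-token prediction using the Hugging Face transformers library; GPT-2 is used purely as a small, freely available illustrative model, not as a stand-in for any system discussed here.

```python
# Minimal next-token prediction sketch (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The robot picked up the"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

# The logits at the last position score every candidate next token;
# greedy decoding simply takes the highest-scoring one. Sampling from
# this distribution instead is what makes generation feel creative.
next_id = logits[0, -1].argmax()
print(text + tokenizer.decode(next_id))
```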
The extraordinary ability of LLMs to process text has spurred a number of companies to use the AI systems to help control robots through voice commands, translating prompts from users into code the robots can run. For instance, Boston Dynamics’ robot dog Spot, now integrated with OpenAI’s ChatGPT, can act as a tour guide. Figure’s humanoid robots and Unitree’s Go2 robot dog are similarly equipped with ChatGPT.
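As a rough illustration of that prompt-to-code pattern, here is a sketch assuming the OpenAI Python client; the command allowlist, the model name, and the robot commands themselves are hypothetical placeholders, not any vendor’s actual interface.

```python
# Sketch of translating a natural-language request into a robot command.
# The allowlist and command names are hypothetical, not a real robot SDK.
from openai import OpenAI

ALLOWED = {"walk_forward", "turn_left", "turn_right", "sit"}

def llm_to_command(user_request: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": ("Translate the request into exactly one command "
                         f"from this set: {sorted(ALLOWED)}. "
                         "Reply with the command only.")},
            {"role": "user", "content": user_request},
        ],
    )
    command = resp.choices[0].message.content.strip()
    # Validating against an allowlist is one simple guardrail; the
    # jailbreaks described below work by defeating checks like this.
    if command not in ALLOWED:
        raise ValueError(f"Refused unrecognized command: {command}")
    return command

print(llm_to_command("Could you take a step ahead?"))
```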
However, a group of scientists has recently identified a host of security vulnerabilities for LLMs. So-called jailbreaking attacks craft prompts that bypass LLM safeguards and fool the AI systems into generating unwanted content, such as instructions for building bombs, recipes for synthesizing illegal drugs, and guides for defrauding charities. — Read More
Banishing LLM Hallucinations Requires Rethinking Generalization
Despite their powerful chat, coding, and reasoning abilities, Large Language Models (LLMs) frequently hallucinate. Conventional wisdom suggests that hallucinations are a consequence of a balance between creativity and factuality, which can be mitigated, but not eliminated, by grounding the LLM in external knowledge sources. Through extensive systematic experiments, we show that these traditional approaches fail to explain why LLMs hallucinate in practice. Specifically, we show that LLMs augmented with a massive Mixture of Memory Experts (MoME) can easily memorize large datasets of random numbers. We corroborate these experimental findings with a theoretical construction showing that simple neural networks trained to predict the next token hallucinate when the training loss is above a threshold, as it usually is in practice when training on internet-scale data. We interpret our findings by comparing against traditional retrieval methods for mitigating hallucinations. We use our findings to design a first-generation model for removing hallucinations — Lamini-1 — that stores facts in a massive mixture of millions of memory experts that are retrieved dynamically. — Read More
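The abstract does not spell out Lamini-1’s internals, but the core retrieval idea can be sketched as a bank of key/value “memory experts” queried with top-k attention; the sizes, the selection rule, and all names below are illustrative assumptions written in PyTorch, not the paper’s actual design.

```python
# Illustrative mixture-of-memory-experts layer: facts live in expert
# values, and only the top-k experts are retrieved per query.
import torch
import torch.nn.functional as F

class MemoryExperts(torch.nn.Module):
    def __init__(self, n_experts=10_000, d_model=256, k=4):
        super().__init__()
        # The paper describes millions of experts; 10k keeps the demo small.
        self.keys = torch.nn.Parameter(torch.randn(n_experts, d_model))
        self.values = torch.nn.Parameter(torch.randn(n_experts, d_model))
        self.k = k

    def forward(self, h):                       # h: (batch, d_model)
        scores = h @ self.keys.T                # (batch, n_experts)
        top, idx = scores.topk(self.k, dim=-1)  # dynamic retrieval
        weights = F.softmax(top, dim=-1)        # mix only chosen experts
        retrieved = self.values[idx]            # (batch, k, d_model)
        return (weights.unsqueeze(-1) * retrieved).sum(dim=1)

h = torch.randn(2, 256)
print(MemoryExperts()(h).shape)  # torch.Size([2, 256])
```

In a sketch like this, gradients flow only through the k retrieved experts, which is what would let a model memorize specific facts cheaply, in the spirit of the abstract’s memorization experiments.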
The phony comforts of AI skepticism
At the end of last month, I attended an inaugural conference in Berkeley named the Curve. The idea was to bring together engineers at big tech companies, independent safety researchers, academics, nonprofit leaders, and people who have worked in government to discuss the biggest questions of the day in artificial intelligence:
Does AI pose an existential threat? How should we weigh the risks and benefits of open weights? When, if ever, should AI be regulated? How? Should AI development be slowed down or accelerated? Should AI be handled as an issue of national security? When should we expect AGI?
If the idea was to produce thoughtful collisions between e/accs and decels, the Curve came up a bit short: the conference was long on existential dread, and I don’t think I heard anyone say that AI development should speed up.
… At the moment, no one knows for sure whether the large language models that are now under development will achieve superintelligence and transform the world. And in that uncertainty, two primary camps of criticism have emerged.
The first camp, which I associate with the external critics, holds that AI is fake and sucks. The second camp, which I associate more with the internal critics, believes that AI is real and dangerous. — Read More
Intel’s Death and Potential Revival
In 1980 IBM, under pressure from its customers to provide computers for personal use, not just mainframes, set out to create the IBM PC; given the project’s low internal priority but high external demand, the company decided to outsource two critical components: Microsoft would provide the DOS operating system, which would run on the Intel 8088 processor.
Those two deals would shape the computing industry for the following 27 years. Given that the point of the personal computer was to run applications, the operating system that provided the APIs for those applications would have unassailable lock-in, leading to Microsoft’s dominance with first DOS and then Windows, which was backwards compatible.
… It follows, then, that if the U.S. wants to make Intel viable, it ideally will provide not just money, but also a point of integration. Given this, if the U.S. is serious about AGI, then the true Manhattan Project — doing something that will be very expensive and not necessarily economically rational — is filling in the middle of the sandwich. Saving Intel, in other words. — Read More
Trust Issues in AI
For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from the U.S. Department of Defense. But today’s tools are hardly the intentional product of the diverse generations of innovators that came before. We agree with Morozov that the “refuseniks,” as he calls them, are wrong to see AI as “irreparably tainted” by its origins. AI is better understood as a creative, global field of human endeavor that has been largely captured by U.S. venture capitalists, private equity, and Big Tech. But that was never the inevitable outcome, and it doesn’t need to stay that way. — Read More
Hugging Face’s CEO reveals his 6 predictions for the industry next year, including China leading the US
Hugging Face’s CEO predicts the first major AI protest and market disruptions in 2025. His predictions include China leading the AI race, driven by open-source model developments. — Read More
Over ½ of Long Posts on LinkedIn Are Likely AI-Generated Since ChatGPT Launched
Have you seen a thought leadership LinkedIn post and wondered if it was AI-generated or human-written?
We have likely all experienced the same feeling on LinkedIn within the last couple of years… seeing a long-form post, suspecting it is AI-generated, and watching the author pass it off as their own thought leadership.
In this study, we look at the impact of ChatGPT and other generative AI tools on the volume of AI content that is being published on LinkedIn. — Read More
Veo and Imagen 3: Announcing new video and image generation models on Vertex AI
Generative AI is leading to real business growth and transformation. Among enterprise companies with gen AI in production, 86% report an increase in revenue, with an estimated 6% growth. That’s why Google is investing in its AI technology with new models like Veo, our most advanced video generation model, and Imagen 3, our highest quality image generation model. … Veo, now available on Vertex AI in private preview, empowers companies to effortlessly generate high-quality videos from simple text or image prompts, while Imagen 3 generates the most realistic and highest quality images from simple text prompts, surpassing previous versions of Imagen in detail, lighting, and artifact reduction. Imagen 3 will be available to all Vertex AI customers starting next week. — Read More
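For readers who want to experiment once access opens up, a minimal sketch of calling Imagen from the Vertex AI Python SDK (google-cloud-aiplatform) might look like the following; the project ID, region, and model identifier are placeholders to adapt to your environment and to whatever IDs Google publishes.

```python
# Sketch of image generation on Vertex AI. The project, region, and
# model ID below are assumptions; substitute your own values.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-project-id", location="us-central1")

model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")
images = model.generate_images(
    prompt="A product photo of a ceramic mug on a wooden table",
    number_of_images=1,
)
images[0].save("mug.png")
```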