OpenAI Is Bringing an AI-Driven Feature-Length Animated Movie to Cannes

You knew it was bound to happen, and now, it has. The Wall Street Journal reports that OpenAI is lending its services to the production of a feature-length animated film called Critterz, which aims to be finished in time for next year’s Cannes Film Festival. That would put its production time at nine months, which is unheard of for a feature-length animated film, but that’s because it’ll be created using AI.

According to the paper, using OpenAI’s resources, production companies Vertigo Films and Native Foreign will hire actors to voice characters created by feeding original drawings into generative AI software. The entire film is expected to cost less than $30 million and will only take about 30 people to complete. — Read More

#vfx

RL-as-a-Service will outcompete AGI companies (and that’s good)

Companies drive AI development today. There are two stories you could tell about the mission of an AI company:

AGI: AI labs will stop at nothing short of Artificial General Intelligence. With enough training and iteration AI will develop a general ability to solve any (feasible) task. We can leverage this general intelligence to solve any problem, including how to make a profit. 

Reinforcement Learning-as-a-Service (RLaaS)[1]: AI labs have an established process for training language models to attain high performance on clean datasets. By painstakingly creating benchmarks for problems of interest, they can solve any given problem with RL, leveraging language models as a general-purpose prior. This is essentially a version of the CAIS model. — Read More
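The core of the RLaaS story is that any problem with a programmatic verifier can be turned into an RL reward signal for a language model. The sketch below illustrates that pattern in miniature; the names (`BenchmarkTask`, `rl_step`) and the toy arithmetic task are hypothetical, not any lab's actual pipeline.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class BenchmarkTask:
    """One item in a curated benchmark: a prompt plus a programmatic
    verifier that turns a model's answer into an RL reward."""
    prompt: str
    verify: Callable[[str], float]  # 1.0 for a correct answer, else 0.0


def arithmetic_task(a: int, b: int) -> BenchmarkTask:
    """A toy verifiable task; real benchmarks are far harder to build."""
    return BenchmarkTask(
        prompt=f"What is {a} + {b}? Answer with just the number.",
        verify=lambda answer: 1.0 if answer.strip() == str(a + b) else 0.0,
    )


def rl_step(task: BenchmarkTask, model_answer: str) -> float:
    # In a real RLaaS pipeline this reward would feed a policy-gradient
    # update of the language model; here we only score the answer.
    return task.verify(model_answer)


task = arithmetic_task(2, 3)
assert rl_step(task, "5") == 1.0
assert rl_step(task, "7") == 0.0
```

The expensive part in this picture is not the RL loop but authoring the `verify` function, which matches the article's claim that benchmark creation is the labs' bottleneck.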

#strategy

The Last Programmers

We’re witnessing the final generation of people who translate ideas into code by hand.

I quit my job at Amazon in May to join a startup called Icon.

… I felt like I was reaching the ceiling of what I could learn about AI and building good products within Amazon’s constraints. That’s why I joined Icon. At Icon, we move at a completely different speed. We ship features in days that would have taken Amazon months to approve.

… The interesting part is watching how my teammates work. One of them hasn’t looked at actual code in weeks. Instead, he writes design documents in plain English and trusts AI to handle the implementation. When something needs fixing, he edits the document, not the code.

It made me realize something profound: we’re living through the end of an era where humans translate ideas into code by hand. Within a few years, that skill will be as relevant as knowing how to shoe a horse. — Read More

#devops

DOGE’s Flops Shouldn’t Spell Doom for AI In Government

Just a few months after Elon Musk’s retreat from his unofficial role leading the Department of Government Efficiency (DOGE), we have a clearer picture of his vision of government powered by artificial intelligence, and it has a lot more to do with consolidating power than benefitting the public. Even so, we must not lose sight of the fact that a different administration could wield the same technology to advance a more positive future for AI in government.

To most on the American left, the DOGE end game is a dystopic vision of a government run by machines that benefits an elite few at the expense of the people. It includes AI rewriting government rules on a massive scale, salary-free bots replacing human functions, and a nonpartisan civil service forced to adopt an alarmingly racist and antisemitic Grok AI chatbot built by Musk in his own image. And yet despite Musk’s proclamations about driving efficiency, little in the way of cost savings has materialized and few successful examples of automation have been realized. — Read More

#strategy

The Dead Internet Theory: A Survey on Artificial Interactions and the Future of Social Media

The Dead Internet Theory (DIT) suggests that much of today’s internet, particularly social media, is dominated by non-human activity, AI-generated content, and corporate agendas, leading to a decline in authentic human interaction. This study explores the origins, core claims, and implications of DIT, emphasizing its relevance in the context of social media platforms. The theory emerged as a response to the perceived homogenization of online spaces, highlighting issues like the proliferation of bots, algorithmically generated content, and the prioritization of engagement metrics over genuine user interaction. AI technologies play a central role in this phenomenon, as social media platforms increasingly use algorithms and machine learning to curate content, drive engagement, and maximize advertising revenue. While these tools enhance scalability and personalization, they also prioritize virality and consumption over authentic communication, contributing to the erosion of trust, the loss of content diversity, and a dehumanized internet experience. This study redefines DIT in the context of social media, proposing that the commodification of content consumption for revenue has taken precedence over meaningful human connectivity. By focusing on engagement metrics, platforms foster a sense of artificiality and disconnection, underscoring the need for human-centric approaches to revive authentic online interaction and community building. — Read More

#robotics

Don’t Build An AI Safety Movement

Safety advocates are about to change the AI policy debate for the worse. Faced with political adversity, few recent policy wins, and a perceived lack of obvious paths to policy victory, the movement yearns for a different way forward. One school of thought is growing in popularity: to create political incentive to get serious about safety policy, one must ‘build a movement’. That is, one must create widespread salience of AI safety topics and channel it into an organised constituency that puts pressure on policymakers.

Recent weeks have seen more and more signs of efforts to build a popular movement. In two weeks, AI safety progenitors Eliezer Yudkowsky and Nate Soares are publishing a general-audience book to shore up public awareness and support — with a media tour to boot, I’m sure. PauseAI’s campaigns are growing in popularity and ecosystem support, with a recent UK-based swipe at Google DeepMind drawing national headlines. And successful safety career accelerator MATS is now also in the business of funneling young talent into attempts to build a movement. Now, these efforts are in their very early stages, and might still just stumble on their own. But they point to a broader motivation — one that’s worth seriously discussing now. — Read More

#trust

Why language models hallucinate

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such “hallucinations” persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious — they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded — language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This “epidemic” of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems. — Read More
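The paper's incentive argument can be made concrete with a little expected-value arithmetic: under the common grading scheme (1 point for a correct answer, 0 for a wrong answer or an abstention), guessing always weakly beats saying "I don't know", and only a penalty for wrong answers changes that. The sketch below is my own illustration of that argument, not code from the paper; the function name and the example probabilities are assumptions.

```python
def expected_score(p_correct: float, p_answer: float,
                   wrong_penalty: float = 0.0) -> float:
    """Expected benchmark score when a model answers a fraction p_answer
    of questions (with accuracy p_correct on those) and abstains on the
    rest. Abstentions score 0; wrong answers cost wrong_penalty."""
    return p_answer * (p_correct - (1 - p_correct) * wrong_penalty)


# Binary grading (no penalty): guessing dominates abstaining,
# even at low accuracy.
assert expected_score(p_correct=0.3, p_answer=1.0) > \
       expected_score(p_correct=0.3, p_answer=0.4)   # 0.30 vs 0.12

# With a wrong-answer penalty, low-confidence guessing has negative
# expected value, so abstaining becomes the better test-taking strategy.
assert expected_score(0.3, 1.0, wrong_penalty=0.5) < \
       expected_score(0.3, 0.0, wrong_penalty=0.5)   # -0.05 vs 0.0
```

This is why the authors argue for modifying how existing benchmarks are scored: as long as the penalty term is zero, optimizing test performance rewards confident guessing over calibrated uncertainty.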

#nlp