OpenAI’s Sora video generator appears to have leaked

A group appears to have leaked access to Sora, OpenAI’s video generator, in protest of what it’s calling duplicity and “art washing” on OpenAI’s part.

On Tuesday, the group published a project on the AI dev platform Hugging Face seemingly connected to OpenAI’s Sora API, which isn’t yet publicly available. Using their authentication tokens — presumably from an early access system — the group created a front end that lets users generate videos with Sora. — Read More

#vfx

This AI-generated version of Minecraft may represent the future of real-time video generation

The game was created from clips and keyboard inputs alone, as a demo for real-time interactive video generation.

When you walk around in a version of the video game Minecraft from the AI companies Decart and Etched, it feels a little off. Sure, you can move forward, cut down a tree, and lay down a dirt block, just like in the real thing. If you turn around, though, the dirt block you just placed may have morphed into a totally new environment. That doesn’t happen in Minecraft. But this new version is entirely AI-generated, so it’s prone to hallucinations. Not a single line of code was written.

For Decart and Etched, this demo is a proof of concept. They imagine that the technology could be used for real-time generation of videos or video games more generally. “Your screen can turn into a portal—into some imaginary world that doesn’t need to be coded, that can be changed on the fly. And that’s really what we’re trying to target here,” says Dean Leitersdorf, cofounder and CEO of Decart, which came out of stealth this week. — Read More

#vfx

New Zemeckis film used AI to de-age Tom Hanks and Robin Wright

On Friday, TriStar Pictures released Here, a $50 million Robert Zemeckis-directed film that used real-time generative AI face-transformation techniques to portray actors Tom Hanks and Robin Wright across a 60-year span, marking one of Hollywood’s first full-length features built around AI-powered visual effects.

The film adapts a 2014 graphic novel set primarily in a New Jersey living room across multiple time periods. Rather than cast different actors for various ages, the production used AI to modify Hanks’ and Wright’s appearances throughout.

The de-aging technology comes from Metaphysic, a visual effects company that creates real-time face-swapping and aging effects. During filming, the crew watched two monitors simultaneously: one showing the actors’ actual appearances and another displaying them at whatever age the scene required. — Read More

#vfx

HeyGen enables your digital twin to do Zoom calls for you

Video platform HeyGen has added a feature that it claims allows users to send AI-powered digital versions of themselves to Zoom meetings and other live interactions.

The avatars can join one or more meetings simultaneously, 24/7. They are designed not only to look and sound like the people they represent, but also to think, talk and make decisions like them, according to HeyGen.

The HeyGen Interactive Avatar is equipped with OpenAI real-time voice integration, which allows it to hold an intelligent, efficient and timely conversation with any audience. — Read More

#vfx

Meta announces Movie Gen, an AI-powered video generator

A new AI-powered video generator from Meta produces high-definition footage complete with sound, the company announced today. The announcement comes several months after competitor OpenAI unveiled Sora, its text-to-video model — though public access to Movie Gen isn’t happening yet.

Movie Gen uses text inputs to automatically generate new videos, as well as edit existing footage or still images. The New York Times reports that the audio added to videos is also AI-generated, matching the imagery with ambient noise, sound effects, and background music. The videos can be generated in different aspect ratios. — Read More

#image-recognition, #vfx

Lionsgate Inks Deal With AI Firm to Mine Its Massive Film and TV Library

The deal will see Runway train a new AI model on Lionsgate’s film and TV library as the entertainment company uses the tech “to develop cutting-edge, capital-efficient content creation opportunities.”

In a significant move, Lionsgate and the video-focused artificial intelligence research firm Runway have inked a deal that will see Runway train a new generative AI model on Lionsgate content, and will see the entertainment company use the tech as it produces future film and TV projects.

While details are scarce, the companies say that the new model will be “customized to Lionsgate’s proprietary portfolio of film and television content,” and exclusive to the studio. The purpose will be to “help Lionsgate Studios, its filmmakers, directors and other creative talent augment their work.” — Read More

#vfx

How To Balance AI Innovation And Human Creativity In Hollywood Storytelling

As artificial intelligence technology rapidly advances, Hollywood faces a pivotal challenge: integrating AI into the filmmaking process without overshadowing the human creativity that has long been the bedrock of compelling storytelling.

Recent industry disruptions, such as the Screen Actors Guild and Writers Guild of America strikes—which cost nearly $5 billion due to production delays and cancellations—have highlighted the industry’s deep concerns about AI’s impact. With AI spending predicted to reach $886 million in the global film industry in 2024 and 70% of major companies already incorporating AI, the stakes are higher than ever. The question remains: Can AI enhance the industry without undermining workforce stability and the emotional depth that defines entertainment? — Read More

#vfx

‘Hold on to your seats’: how much will AI affect the art of film-making?

The future is here, whether some like it or not, and artificial intelligence is already impacting the film industry. But just how far can, and should, it go?

Last year, Rachel Antell, an archival producer for documentary films, started noticing AI-generated images mixed in with authentic photos. There are always holes or limitations in an archive; in one case, film-makers got around a shortage of images for a barely photographed 19th-century woman by using AI to generate what looked like old photos. Which brought up the question: should they? And if they did, what sort of transparency is required? The capability and availability of generative AI – the type that can produce text, images and video – have changed so rapidly, and the conversations around it have been so fraught, that film-makers’ ability to use it far outpaces any consensus on how.

… So Antell and several colleagues formed the Archival Producers Alliance (APA), a volunteer group of about 300 documentary producers and researchers dedicated to, in part, developing best practices for use of generative AI in factual storytelling. “Instead of being, ‘the house is burning, we’ll never have jobs,’ it’s much more based around an affirmation of why we got into this in the first place,” said Stephanie Jenkins, a founding APA member. Experienced documentary film-makers have “really been wrestling with this”, in part because “there is so much out there about AI that is so confusing and so devastating or, alternatively, a lot of snake oil.” — Read More

#vfx

Hollywood Nightmare? New Streaming Service Lets Viewers Create Their Own Shows Using AI

Generative artificial intelligence is coming for streaming, with the release of a platform dedicated to AI content that allows users to create episodes with a prompt of just a couple of words.

Fable Studio, an Emmy-winning San Francisco startup, on Thursday announced Showrunner, a platform the company says can write, voice and animate episodes of shows it carries. Under the initial release, users will be able to watch AI-generated series and create their own content — complete with the ability to control dialogue, characters and shot types, among other controls. — Read More

#vfx

Tribeca Festival to Debut Short Films Made Using OpenAI

The Tribeca Festival will feature five short films made using technology from OpenAI.

The films will use OpenAI’s Sora, a text-to-video model that generates video clips from written descriptions. This is the first time films made with this technology will be showcased at the festival.

… The films will be screened June 15, with a conversation with the filmmakers afterward. — Read More

#vfx