The Oscars just declared that AI actors and AI-written scripts can’t win awards

A hot potato: With generative AI becoming more prevalent in society, are we heading toward a future where an AI-created actor or script wins an Oscar? If that ever happens, it certainly won't be anytime soon: the Academy of Motion Picture Arts and Sciences has just made both ineligible for its awards.

The Academy clarified rules for two categories related to AI, reports Vanity Fair. The first states that the only acting roles eligible for Oscar nominations are those “demonstrably performed by humans with their consent.” Screenplays, meanwhile, must be human-authored to be eligible.

While this all sounds like something we’ll have to deal with in the future, it’s happening now. — Read More

#vfx

Higgsfield AI for Creative Professionals: A Deep Dive

Higgsfield AI is a generative video model and platform designed for creating high-fidelity, controllable, and stylistically consistent video content from text and image prompts. Unlike many early-generation AI video tools that produce short, often disjointed clips, Higgsfield focuses on solving one of the biggest problems for professional use: consistency. It aims to give creators the ability to maintain the same character, aesthetic, and environment across multiple shots, making it a viable tool for narrative and commercial projects. — Read More

#vfx

Efficient Video Intelligence in 2026

Five years ago, video understanding mostly meant action recognition on Kinetics-400 or short-clip captioning on MSR-VTT. Today, vision-language models reason about hour-long footage, on-device tracking segments any object at 16 FPS on a phone, and a single 100M-parameter encoder can match domain experts across image understanding, dense prediction, and VLM tasks. The shift came from rethinking what a video model needs to do, and from taking deployment constraints seriously.

This post walks through where efficient video intelligence stands in April 2026, following how a video system processes its input from raw frames through spatial perception, long-form temporal understanding, multimodal fusion and reasoning, and the deployment stack that makes any of it shippable.
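As a rough illustration of the stages the post names (raw frames, spatial perception, long-form temporal understanding, multimodal fusion), here is a toy pipeline sketch; every class, method, and value below is a hypothetical stand-in, not code from the post:

```python
# Hypothetical sketch: raw frames -> spatial perception ->
# temporal aggregation -> multimodal fusion with a text query.
from dataclasses import dataclass


@dataclass
class VideoPipeline:
    """Toy stand-in for an efficient video-intelligence stack."""
    fps: int = 16  # on-device frame-rate budget mentioned in the post

    def spatial_perception(self, frames):
        # A compact encoder turns each frame into a feature vector.
        return [f"feat({f})" for f in frames]

    def temporal_understanding(self, features):
        # Long-form models compress per-frame features into a few tokens.
        return features[:: max(1, len(features) // 4)]

    def multimodal_fusion(self, tokens, prompt):
        # A VLM reasons over the visual tokens plus the text query.
        return {"prompt": prompt, "tokens": tokens}

    def run(self, frames, prompt):
        feats = self.spatial_perception(frames)
        tokens = self.temporal_understanding(feats)
        return self.multimodal_fusion(tokens, prompt)


result = VideoPipeline().run([f"frame{i}" for i in range(8)], "what happens?")
```

The point of the sketch is the ordering of stages, which matches how the post walks through its survey; the actual systems it covers replace each method with a learned model.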

A note up front: the post leans heavily on research from my own group, including EUPE, the EfficientSAM / Efficient Track Anything / EdgeTAM compression line, LongVU, Tempo, EgoAVU, VideoAuto-R1, DepthLM, and ParetoQ. I have tried to place each piece against the parallel and competing work in its section, but this is a perspective from inside one research program rather than a neutral survey. — Read More

#vfx

Synchronizing the Senses: Powering Multimodal Intelligence for Video Search

Today’s filmmakers capture more footage than ever to maximize their creative options, often generating hundreds, if not thousands, of hours of raw material per season or franchise. Extracting the vital moments needed to craft compelling storylines from this sheer volume of media is a notoriously slow and punishing process. When editorial teams cannot surface these key moments quickly, creative momentum stalls and severe fatigue sets in.

Meanwhile, the broader search landscape is undergoing a profound transformation. We are moving beyond simple keyword matching toward AI-driven systems capable of understanding deep context and intent. Yet, while these advances have revolutionized text and image retrieval, searching through video, the richest medium for storytelling, remains a daunting “needle in a haystack” challenge.

The solution to this bottleneck cannot rely on a single algorithm. Instead, it demands orchestrating an expansive ensemble of specialized models: tools that identify specific characters, map visual environments, and parse nuanced dialogue.  — Read More
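To make the “ensemble of specialized models” idea concrete, here is a minimal fan-out-and-merge sketch; the model functions and clip name are invented for illustration and are not the article's actual system:

```python
# Hypothetical sketch: run several specialized models over a clip and
# merge their outputs into a single searchable record.
def character_model(clip):
    # Stand-in for a model that identifies specific characters.
    return {"characters": ["protagonist"]}


def environment_model(clip):
    # Stand-in for a model that maps the visual environment.
    return {"environment": "rainy street"}


def dialogue_model(clip):
    # Stand-in for a model that parses spoken dialogue.
    return {"dialogue": "we need to talk"}


def index_clip(clip, models):
    """Run every specialized model and merge results into one document."""
    record = {"clip": clip}
    for model in models:
        record.update(model(clip))
    return record


doc = index_clip(
    "ep01_scene42.mov",
    [character_model, environment_model, dialogue_model],
)
```

A search index built over records like `doc` is what lets an editor query by character, place, or line of dialogue instead of scrubbing through raw footage.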

#vfx

Versatile Editing of Video Content, Actions, and Dynamics without Training

Controlled video generation has seen drastic improvements in recent years. However, editing actions and dynamic events, or inserting contents that should affect the behaviors of other objects in real-world videos, remains a major challenge. Existing trained models struggle with complex edits, likely due to the difficulty of collecting relevant training data. Similarly, existing training-free methods are inherently restricted to structure- and motion-preserving edits and do not support modification of motion or interactions. Here, we introduce DynaEdit, a training-free editing method that unlocks versatile video editing capabilities with pretrained text-to-video flow models.  — Read More

#vfx

Val Kilmer Resurrected by AI to Star in ‘As Deep as the Grave’ Movie

Five years prior to his death in 2025, Val Kilmer was cast as Father Fintan, a Catholic priest and Native American spiritualist, in “As Deep as the Grave.” But Kilmer, who was battling throat cancer, was too sick to ever make it to set.

… Even though he didn’t shoot a single scene, Voorhees has been able to realize his vision of having Kilmer in the ensemble by using state-of-the-art generative AI. And he’s done it with the cooperation of the late actor’s estate and his daughter Mercedes (Voorhees says Kilmer’s son Jack is also supportive). — Read More

#vfx

5 design skills to sharpen in the AI era

AI is reshaping the way products are made: It’s accelerating exploration, lowering barriers to entry, and widening the circle of who can participate in the design process. In response, teams are honing new skills to meet the moment. In our recent report State of the Designer 2026, we asked the design community which skills matter most to them in the age of AI. Here, we’re sharing what those skills are—and how to perfect them. — Read More

#vfx

Tilly Norwood | Take The Lead (Official Music Video)

— Read More
#videos

#vfx

The Capability Maturity Model for AI in Design

Matt Davey, who is Chief Experience Officer at 1Password, created a useful capability maturity model for AI in design. His original model has 5 levels (Limited, Reactive, Developing, Embedded, and Leading), each of which differs along 6 characteristics (Leadership on AI, Strategy & Budgeting, AI Culture & Talent, AI Learning & Enablement, AI Agents & Automation, and AI Product Design). Thus, the model covers both the use of AI within the design process and the use of AI in the resulting product. I recommend you read the full thing, but here is a summary of Davey’s 5 capability maturity levels for AI in design.
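The 5-levels-by-6-characteristics structure can be sketched as data; note the scoring rule below is an invented illustration for mapping scores to a level, not Davey's actual rubric:

```python
# Davey's capability maturity model as data:
# 5 levels, each assessed along 6 characteristics.
LEVELS = ["Limited", "Reactive", "Developing", "Embedded", "Leading"]
CHARACTERISTICS = [
    "Leadership on AI",
    "Strategy & Budgeting",
    "AI Culture & Talent",
    "AI Learning & Enablement",
    "AI Agents & Automation",
    "AI Product Design",
]


def level_for(scores):
    """Map average per-characteristic scores (1-5) to a maturity level.

    The averaging rule is a hypothetical illustration, not Davey's method.
    """
    avg = sum(scores.values()) / len(scores)
    return LEVELS[min(len(LEVELS) - 1, max(0, round(avg) - 1))]


team = {c: 3 for c in CHARACTERISTICS}  # a team scoring 3 across the board
```

Under this toy rule, a team scoring 3 on every characteristic lands at “Developing,” the middle of Davey's ladder.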

As discussed below, I added Maturity Level 6, Symbiotic, for a more complete capability maturity ladder.

For a summary of this article, watch my short overview explainer video (YouTube, 6 min.). — Read More

#devops, #vfx

Netflix Acquires AI Filmmaking Start-Up Founded by Ben Affleck

In a rare acquisition, Netflix has bought InterPositive, a start-up founded by Ben Affleck that makes AI-powered tools for filmmakers.

… While Netflix historically is more often a builder than a buyer, the company said it saw Affleck’s InterPositive as providing a unique set of AI tools that “keeps filmmakers at the center of the process.”  — Read More

#vfx