Star Wars: James Earl Jones steps back from Darth Vader role

James Earl Jones is the voice behind the legendary Star Wars villain Darth Vader, but it seems the 91-year-old has finally hung up his helmet.

In an interview with Vanity Fair, Star Wars supervising sound editor Matthew Wood said Jones “was looking into winding down this… character”.

Jones’s voice was remastered from the original Star Wars films for recent Disney+ series Obi-Wan Kenobi.

Some of Jones’s archival voice recordings were also used.

For future Star Wars projects, Jones has reportedly granted permission for Disney and Lucasfilm to use artificial intelligence and archival recordings to recreate his voice. Read More

#vfx

This guy is using AI to make a movie — and you can help decide what happens next

CNN — “Salt” resembles many science-fiction films from the ’70s and early ’80s, complete with 35mm footage of space freighters and moody alien landscapes. But while it looks like a throwback, the way it was created points to what could be a new frontier for making movies.

“Salt” is the brainchild of Fabian Stelzer. He’s not a filmmaker, but for the last few months he’s been largely relying on artificial intelligence tools to create this series of short films, which he releases roughly every few weeks on Twitter.

Stelzer creates images with image-generation tools such as Stable Diffusion, Midjourney and DALL-E 2. He makes voices mostly using AI voice-generation tools such as Synthesia or Murf. And he uses GPT-3, a text generator, to help with the scriptwriting.

There’s an element of audience participation, too. After each new installment, viewers can vote on what should happen next. Stelzer takes the results of these polls and incorporates them into the plot of future films, which he can spin up more quickly than a traditional filmmaker might since he’s using these AI tools. Read More
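Stelzer's workflow amounts to a loop: generate a script, render images and voices, publish, poll the audience, then fold the winning choice back into the next script. A minimal orchestration sketch follows; every function here is a hypothetical stand-in for the tools named above, not a real API.

```python
def write_scene(premise: str, audience_choice: str) -> str:
    """Hypothetical stand-in for a GPT-3 call that drafts the next scene."""
    return f"Scene: {premise}. The audience chose: {audience_choice}."

def render_frames(script: str) -> list[str]:
    """Hypothetical stand-in for Stable Diffusion / Midjourney / DALL-E 2 prompts."""
    return [f"frame rendered from prompt: {line}" for line in script.split(". ") if line]

def synthesize_voice(script: str) -> str:
    """Hypothetical stand-in for an AI voice tool such as Synthesia or Murf."""
    return f"voiceover for: {script}"

def next_installment(premise: str, poll_winner: str) -> dict:
    """One iteration of the generate -> publish -> poll loop."""
    script = write_scene(premise, poll_winner)
    return {
        "script": script,
        "frames": render_frames(script),
        "audio": synthesize_voice(script),
    }

episode = next_installment("a freighter drifts near an alien moon",
                           "land on the moon")
```

The point of the sketch is the structure, not the stubs: because each stage is a generator call rather than a shoot, folding the poll winner into the next installment is just another prompt.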

#vfx

PP-Matting: High-Accuracy Natural Image Matting

Natural image matting is a fundamental and challenging computer vision task with many applications in image editing and composition. Recently, deep learning-based approaches have achieved great improvements in image matting. However, most of them require a user-supplied trimap as an auxiliary input, which limits real-world matting applications. Although some trimap-free approaches have been proposed, their matting quality is still unsatisfactory compared to trimap-based ones. Without trimap guidance, matting models easily suffer from foreground-background ambiguity and generate blurry details in the transition area. In this work, we propose PP-Matting, a trimap-free architecture that achieves high-accuracy natural image matting. Our method applies a high-resolution detail branch (HRDB) that extracts fine-grained details of the foreground while keeping the feature resolution unchanged. We also propose a semantic context branch (SCB) that adopts a semantic segmentation subtask, preventing local ambiguity in the detail prediction caused by missing semantic context. In addition, we conduct extensive experiments on two well-known benchmarks: Composition-1k and Distinctions-646. The results demonstrate the superiority of PP-Matting over previous methods. Furthermore, we provide a qualitative evaluation of our method on human matting, which shows its outstanding performance in practical applications. Read More
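The two-branch idea can be illustrated with a toy fusion step (a hedged sketch, not the paper's actual fusion module): the semantic branch predicts a coarse three-class map (foreground / background / transition), and the final alpha matte takes the detail branch's prediction only inside the transition region, so fine detail is applied exactly where the semantics are ambiguous.

```python
import numpy as np

def fuse_matting_branches(semantic_map: np.ndarray, detail_alpha: np.ndarray) -> np.ndarray:
    """Fuse a coarse semantic map with a fine-grained detail prediction.

    semantic_map: int array, 0 = background, 1 = transition, 2 = foreground
    detail_alpha: float array in [0, 1] with fine detail, same shape
    Returns an alpha matte: 0 in background, 1 in foreground, and the
    detail branch's alpha inside the transition region.
    """
    alpha = np.where(semantic_map == 2, 1.0, 0.0)   # hard fg/bg from semantics
    transition = semantic_map == 1
    alpha[transition] = detail_alpha[transition]    # fine detail only where ambiguous
    return alpha

semantic = np.array([[0, 1, 2],
                     [0, 1, 2]])
detail = np.array([[0.9, 0.4, 0.1],
                   [0.2, 0.6, 0.3]])
fused = fuse_matting_branches(semantic, detail)
# background columns -> 0, foreground -> 1, transition -> detail values
```

This also shows why the semantic subtask helps: the detail prediction of 0.9 at a confidently-background pixel is simply overridden, which is the kind of local ambiguity the SCB is meant to suppress.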

#image-recognition, #vfx

You can (sort of) generate art like Dall-E with TikTok’s latest filter

If you’re still on the waiting list to try out DALL-E and you just want a quick peek at the kind of technology that powers it, you might want to open up TikTok.

TikTok’s latest filter may have been around for a few days now, but we first noticed its new AI text-to-image generator filter on Sunday. It’s called AI Greenscreen, and it lets you generate painterly-style images based on words you input. The images you generate can then become the background of your TikTok videos, like a green screen. Read More

#image-recognition, #vfx

Soul Machines Announces New Entertainment Division

Partners with Nicklaus Companies to launch the inaugural Digital Twin of Jack Nicklaus, engaging fans and brands to bring interactive sporting and entertainment experiences online

Soul Machines, the groundbreaking company pioneering the creation of autonomously animated Digital People in the metaverse and today’s digital worlds, today announced the launch of a new Entertainment division with the goal of creating unique, highly personalized experiences that redefine fan engagement and entertainment. On the heels of a recent US$70 million Series B1 round (led by new investor SoftBank Vision Fund 2), the new division will launch its inaugural Digital Person, an avatar of legendary American professional golfer Jack Nicklaus, through a partnership with the Nicklaus Companies. Read More

#vfx

The AI that creates any picture you want, explained

Read More
#image-recognition, #videos, #vfx

Google’s New AI: Flying Through Virtual Worlds! 

Read More
#big7, #image-recognition, #videos, #vfx

ZooBuilder: 2D and 3D Pose Estimation for Quadrupeds

Read More

#image-recognition, #vfx, #videos

MuZero with Self-competition for Rate Control in VP9 Video Compression

Video streaming usage has seen a significant rise as entertainment, education, and business increasingly rely on online video. Optimizing video compression has the potential to increase access and quality of content to users, and reduce energy use and costs overall. In this paper, we present an application of the MuZero algorithm to the challenge of video compression. Specifically, we target the problem of learning a rate control policy to select the quantization parameters (QP) in the encoding process of libvpx, an open source VP9 video compression library widely used by popular video-on-demand (VOD) services. We treat this as a sequential decision making problem to maximize the video quality with an episodic constraint imposed by the target bitrate. Notably, we introduce a novel self-competition based reward mechanism to solve constrained RL with variable constraint satisfaction difficulty, which is challenging for existing constrained RL methods. We demonstrate that the MuZero-based rate control achieves an average 6.28% reduction in size of the compressed videos for the same delivered video quality level (measured as PSNR BD-rate) compared to libvpx’s two-pass VBR rate control policy, while having better constraint satisfaction behavior. Read More
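The self-competition idea can be sketched in miniature (a simplified illustration, not DeepMind's actual reward design): the agent is rewarded for beating the mean quality of its own recent constraint-satisfying episodes, and penalized whenever the encoded size blows the bitrate budget. Because the baseline is the agent's own history, the bar for a positive reward rises as the policy improves.

```python
from collections import deque

class SelfCompetitionReward:
    """Toy self-competition reward for constrained rate control.

    +1 for beating the mean quality of the agent's own recent
    constraint-satisfying episodes, -1 otherwise, and -1 whenever the
    encoded size exceeds the target. (Illustrative only; the paper's
    mechanism is more elaborate.)
    """

    def __init__(self, history_size: int = 100):
        self.history = deque(maxlen=history_size)  # qualities of past valid episodes

    def episode_reward(self, quality: float, size: float, target_size: float) -> float:
        if size > target_size:           # constraint violated: no credit for quality
            return -1.0
        baseline = sum(self.history) / len(self.history) if self.history else 0.0
        reward = 1.0 if quality > baseline else -1.0
        self.history.append(quality)     # the bar rises with the agent's own results
        return reward

rc = SelfCompetitionReward()
r1 = rc.episode_reward(quality=40.0, size=900, target_size=1000)   # beats empty baseline
r2 = rc.episode_reward(quality=35.0, size=900, target_size=1000)   # below history mean of 40
r3 = rc.episode_reward(quality=50.0, size=1200, target_size=1000)  # over the bitrate budget
```

The sign-based reward is what makes the constraint difficulty "variable" in a manageable way: episodes that violate the bitrate budget never raise the baseline, so the agent cannot inflate quality by overspending bits.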

#vfx

Why The Andy Warhol Diaries Recreated the Artist’s Voice With AI

The filmmakers had under four minutes of audio to work with. And yes, they considered the ethical concerns.

BACK IN 1982, Andy Warhol was, somewhat infamously, turned into a robot. The machine was made by a Disney Imagineering veteran for a project that never really took off, but Warhol liked his animatronic self. “Machines have less problems,” he once said. “I’d like to be a machine, wouldn’t you?” The artist, who died in 1987, was a master of his own cult of personality, and the robot was practically a manifestation of how the world perceived him: meticulously crafted, if a bit rigid and monotone in his conversational style.

… Even still, using an AI voice to speak for a beloved cultural figure—or anyone, really—isn’t without ethical quandaries. Rossi was already editing The Andy Warhol Diaries last summer when controversy erupted around director Morgan Neville using AI to recreate the voice of Anthony Bourdain for his doc Roadrunner. Rossi had been in consultation with the Andy Warhol Foundation about the AI recreation, and the Bourdain doc inspired a disclaimer that now appears a few minutes into Diaries stating that the voice was created with the Foundation’s permission. “When Andrew shared the idea of using an AI voice, I thought, ‘Wow, this is as bold as it is smart,’” says Michael Dayton Hermann, the foundation’s head of licensing. Read More

#nlp, #vfx