How AI Will Completely Dominate the Animation Industry In Less Than 5 Years

Thinking of animation as a career? You have less than half a decade to do something meaningful.

Why?

  1. DALL-E 2 and other AI art models can now produce a near-infinite variety of illustrations from a simple text prompt (see the sketch after this list). By 2025, they’ll outperform human artists on every metric.
  2. AI animation models already exist that can take a static illustration and “imagine” different movements, poses, and frames. You can make the Mona Lisa smile, laugh, or cry — and there’s nothing stopping you from doing that to other images, too.
  3. AI video models are right around the corner. Soon, studios will be able to create smooth videos of any framerate with nothing more than a text prompt. Short films will be next.
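
To make the first point concrete, here is a minimal sketch of prompt-driven illustration using the open-source `diffusers` library. The checkpoint name, prompt, and parameters are illustrative assumptions, not something from this piece.

```python
# Minimal text-to-image sketch with Hugging Face's `diffusers`.
# Checkpoint and parameters are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")  # a GPU is strongly recommended

image = pipe(
    "a moody alien landscape, 35mm film still",  # the text prompt
    num_inference_steps=30,  # more steps: finer detail, slower generation
    guidance_scale=7.5,      # how strongly the image follows the prompt
).images[0]
image.save("illustration.png")
```

Each variation is just another prompt, which is what makes the “near-infinite variety” claim cheap to test for yourself.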

Read More

#vfx

An AI program voiced Darth Vader in ‘Obi-Wan Kenobi’ so James Earl Jones could finally retire

A startup called Respeecher recreated the actor’s voice as it was in 1977.

After 45 years of voicing one of the most iconic characters in cinema history, James Earl Jones has said goodbye to Darth Vader. At 91, the legendary actor recently told Disney he was “looking into winding down this particular character.” That forced the company to ask itself: how do you even replace Jones? The answer Disney eventually settled on, with the actor’s consent, involved an AI program.

If you’ve seen any of the recent Star Wars shows, you’ve heard the work of Respeecher. It’s a Ukrainian startup that uses archival recordings and a “proprietary AI algorithm” to create new dialogue featuring the voices of “performers from long ago.” In the case of Jones, the company worked with Lucasfilm to recreate his voice as it had sounded when film audiences first heard Darth Vader in 1977. Read More

#vfx

Star Wars: James Earl Jones steps back from Darth Vader role

James Earl Jones is the voice behind legendary Star Wars villain Darth Vader, but it seems the 91-year-old has finally hung up his helmet.

In an interview with Vanity Fair, Star Wars sound supervising editor Matthew Wood said Jones “was looking into winding down this… character”.

Jones’s voice was remastered from the original Star Wars films for the recent Disney+ series Obi-Wan Kenobi.

Some of Jones’s archival voice recordings were also used.

For future Star Wars projects, Jones has reportedly granted permission for Disney and Lucasfilm to use artificial intelligence and archival recordings to recreate his voice. Read More

#vfx

This guy is using AI to make a movie — and you can help decide what happens next

CNN — “Salt” resembles many science-fiction films from the ’70s and early ’80s, complete with 35mm footage of space freighters and moody alien landscapes. But while it looks like a throwback, the way it was created points to what could be a new frontier for making movies.

“Salt” is the brainchild of Fabian Stelzer. He’s not a filmmaker, but for the last few months he’s been relying largely on artificial intelligence tools to create this series of short films, which he releases every few weeks on Twitter.

Stelzer creates images with image-generation tools such as Stable Diffusion, Midjourney and DALL-E 2. He makes voices mostly with AI voice-generation tools such as Synthesia or Murf. And he uses GPT-3, a text generator, to help with the scriptwriting.

There’s an element of audience participation, too. After each new installment, viewers can vote on what should happen next. Stelzer takes the results of these polls and incorporates them into the plot of future films, which he can spin up more quickly than a traditional filmmaker might since he’s using these AI tools. Read More
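
For a sense of how one step of such a pipeline might look, here is a hedged sketch of the scriptwriting stage using OpenAI’s current Python client. The model name, prompts, and the poll-to-prompt wiring are illustrative assumptions, not Stelzer’s actual setup.

```python
# Hedged sketch: drafting a scene with an OpenAI text model, folding the
# audience's poll result into the prompt. Model and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # stand-in for the GPT-3 models the article mentions
    messages=[
        {"role": "system",
         "content": "You write terse scene descriptions for a '70s sci-fi short."},
        {"role": "user",
         "content": "Scene 4: the freighter crew reaches the salt flats. "
                    "Audience poll result: they decide to land."},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)  # draft scene text / dialogue
```

The draft text would then be handed to separate image- and voice-generation tools, which is what lets one person iterate faster than a traditional crew.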

#vfx

PP-Matting: High-Accuracy Natural Image Matting

Natural image matting is a fundamental and challenging computer vision task with many applications in image editing and composition. Recently, deep learning-based approaches have achieved great improvements in image matting. However, most of them require a user-supplied trimap as an auxiliary input, which limits real-world matting applications. Although some trimap-free approaches have been proposed, their matting quality is still unsatisfactory compared to trimap-based ones. Without trimap guidance, matting models easily suffer from foreground-background ambiguity and generate blurry details in the transition area. In this work, we propose PP-Matting, a trimap-free architecture that achieves high-accuracy natural image matting. Our method applies a high-resolution detail branch (HRDB) that extracts fine-grained foreground details while keeping the feature resolution unchanged. We also propose a semantic context branch (SCB) that adopts a semantic segmentation subtask, preventing local ambiguity in the detail prediction caused by missing semantic context. In addition, we conduct extensive experiments on two well-known benchmarks: Composition-1k and Distinctions-646. The results demonstrate the superiority of PP-Matting over previous methods. Furthermore, we provide a qualitative evaluation on human matting, which shows its outstanding performance in practical applications. Read More
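
To make the two-branch idea tangible, here is a toy PyTorch sketch reconstructed from the abstract alone: a full-resolution detail branch (HRDB) and a downsampled semantic branch (SCB) whose segmentation output guides the matte. Layer sizes are arbitrary assumptions; this is not the authors’ implementation.

```python
# Toy sketch of a trimap-free, two-branch matting network (not PP-Matting itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTwoBranchMatting(nn.Module):
    def __init__(self):
        super().__init__()
        # Detail branch: full-resolution convolutions, no downsampling
        self.hrdb = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Semantic branch: downsampled path predicting coarse
        # foreground / background / transition logits
        self.scb = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 1),
        )
        # Fusion head: detail features + upsampled semantics -> alpha matte
        self.head = nn.Conv2d(32 + 3, 1, 3, padding=1)

    def forward(self, image):
        detail = self.hrdb(image)              # fine detail, full resolution
        seg = self.scb(image)                  # coarse semantic context
        seg_up = F.interpolate(seg, size=image.shape[-2:],
                               mode="bilinear", align_corners=False)
        alpha = torch.sigmoid(self.head(torch.cat([detail, seg_up], dim=1)))
        return alpha, seg_up                   # matte + auxiliary segmentation

matte, seg = ToyTwoBranchMatting()(torch.rand(1, 3, 256, 256))
print(matte.shape)  # torch.Size([1, 1, 256, 256])
```

The predicted matte α is what a compositor then uses: a new image is formed as C = αF + (1 − α)B, blending foreground F over background B.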

#image-recognition, #vfx

You can (sort of) generate art like Dall-E with TikTok’s latest filter

If you’re still on the waiting list to try out DALL-E and you just want a quick peek at the kind of technology that powers it, you might want to open up TikTok.

TikTok’s latest filter may have been around for a few days now, but we first noticed the new A.I. text-to-image generator on Sunday. It’s called AI Greenscreen, and it lets you generate painterly-style images from words you input. The images you generate can then become the background of your TikTok videos, like a green screen. Read More

#image-recognition, #vfx

Soul Machines Announces New Entertainment Division

Partners with Nicklaus Companies to launch the inaugural Digital Twin of Jack Nicklaus, engaging fans and brands with interactive sporting and entertainment experiences online

Soul Machines, the groundbreaking company pioneering the creation of autonomously animated Digital People in the metaverse and today’s digital worlds, announced the launch of a new Entertainment division aimed at creating unique, highly personalized experiences that redefine fan engagement and entertainment. On the heels of a recent US$70 million Series B1 round (led by new investor SoftBank Vision Fund 2), the new division will launch its inaugural Digital Person: an avatar of legendary American professional golfer Jack Nicklaus, created through a partnership with the Nicklaus Companies. Read More

#vfx

The AI that creates any picture you want, explained

Read More
#image-recognition, #videos, #vfx

Google’s New AI: Flying Through Virtual Worlds! 

Read More
#big7, #image-recognition, #videos, #vfx

ZooBuilder: 2D and 3D Pose Estimation for Quadrupeds

Read More

#image-recognition, #vfx, #videos