How AI Will Completely Dominate the Animation Industry In Less Than 5 Years

If you’re looking to get into animation as a career, you have less than half a decade to do something meaningful.

Why?

  1. DALL-E 2 and other AI art models can now produce a near-infinite variety of illustrations from a simple text prompt (see the sketch after this list). By 2025, they’ll outperform human artists on every metric.
  2. AI animation models already exist that can take a static illustration and “imagine” different movements, poses, and frames. You can make the Mona Lisa smile, laugh, or cry — and there’s nothing stopping you from doing that to other images, too.
  3. AI video models are right around the corner. Soon, studios will be able to create smooth videos of any framerate with nothing more than a text prompt. Short films will be next.
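
Point 1 is already easy to verify for yourself. Here is a minimal text-to-image sketch using the open-source Hugging Face diffusers library; the checkpoint and prompt are illustrative, and it assumes a CUDA-capable GPU:

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# The checkpoint and prompt are illustrative; any Stable Diffusion
# checkpoint on the Hub works the same way.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a GPU with enough memory

image = pipe("a watercolor illustration of a fox animating a film").images[0]
image.save("illustration.png")
```

Swapping the prompt string is all it takes to get a different illustration, which is exactly the economics shift the list above describes.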

Read More

#vfx

Attention in the Human Brain and Its Applications in ML

Some objects grab our attention when we see them, even when we are not actively looking for them. How precisely does this happen? And, more importantly, how can we incorporate this phenomenon into our computer vision models? In this article, I will explain, from both the neuroscience and the AI-research perspectives, how we pay attention to salient (i.e., noticeable) objects in a visual scene, and how that process is applied in machine learning.

Visual perception, saliency, and attention have been active research topics in neuroscience for decades, and those discoveries have helped AI researchers understand and mimic the corresponding processes in the human brain. Indeed, saliency and attention are active research topics in the AI community, too, with applications ranging from better language understanding to autonomous driving. But before we can understand the AI perspective on attention, we’ll first have to understand it from the neuroscience perspective.

Read More
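
For the ML side of that story, the most common formulation is scaled dot-product attention, popularized by the Transformer. A minimal NumPy sketch, with toy shapes chosen purely for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value by how well its key matches each query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity, scaled
    # Softmax over keys: the "where to look" distribution.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # attention-weighted sum of values

# Toy usage: 4 inputs, 8-dimensional representations.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

The softmax step is the rough analogue of saliency: inputs that match the current query strongly get most of the weight, and the rest are largely ignored.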

#human

Meta Introduces Make-A-Video: An AI system that generates videos from text

Today, we’re announcing Make-A-Video, a new AI system that lets people turn text prompts into brief, high-quality video clips. Make-A-Video builds on Meta AI’s recent progress in generative technology research and has the potential to open new opportunities for creators and artists. The system learns what the world looks like from paired text-image data and how the world moves from video footage with no associated text. As part of our continued commitment to open science, we’re sharing details in a research paper and plan to release a demo experience.

Read More
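
That sentence about training data hints at the design: appearance comes from captioned images, motion from uncaptioned video. The sketch below is a conceptual stand-in, not Meta’s code; every class and method in it is hypothetical and exists only to make the two-stage split explicit:

```python
# Conceptual sketch only -- NOT Meta's code. Every class and method here is
# a hypothetical stand-in that illustrates the two-stage data split the
# announcement describes: appearance from (text, image) pairs, motion from
# video with no associated text.

class ImagePrior:
    """Hypothetical text-to-image model: learns what the world looks like."""
    def __init__(self):
        self.frozen = False

    def step(self, text, image):
        assert not self.frozen
        # ...a real gradient update on one (caption, image) pair goes here...

    def freeze(self):
        self.frozen = True

class TemporalLayers:
    """Hypothetical motion modules: learn how the world moves."""
    def step(self, frames):
        # ...a real gradient update on consecutive uncaptioned frames goes here...
        pass

def train(text_image_pairs, unlabeled_videos):
    prior, temporal = ImagePrior(), TemporalLayers()
    for text, image in text_image_pairs:   # stage 1: captioned images
        prior.step(text, image)
    prior.freeze()                         # appearance is fixed...
    for frames in unlabeled_videos:        # stage 2: raw video, no captions
        temporal.step(frames)              # ...only motion is learned
    return prior, temporal
```

The appeal of this kind of split, as the announcement frames it, is that it sidesteps the scarcity of paired text-video data: captioned images teach semantics, and any raw video can teach motion.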

#big7, #image-recognition, #nlp