When AI can make art – what does it mean for creativity?

When the concept artist and illustrator RJ Palmer first witnessed the fine-tuned photorealism of compositions produced by the AI image generator Dall-E 2, his feeling was one of unease. The tool, released by the AI research company OpenAI, showed a marked improvement on 2021’s Dall-E, and was quickly followed by rivals such as Stable Diffusion and Midjourney. Type in any surreal prompt, from Kermit the Frog in the style of Edvard Munch to Gollum from The Lord of the Rings feasting on a slice of watermelon, and these tools will return a startlingly accurate depiction moments later.

The internet revelled in the meme-making opportunities, with a Twitter account documenting “weird Dall-E generations” racking up more than a million followers. Cosmopolitan trumpeted the world’s first AI-generated magazine cover, and technology investors fell over themselves to usher in the new era of “generative AI”. The image-generation capabilities have already spread to video, with the release of Google’s Imagen Video and Meta’s Make-A-Video.

But AI’s new artistic prowess wasn’t received so ecstatically by some creatives. “The main concern for me is what this does to the future of not just my industry, but creative human industries in general,” says Palmer. Read More

#image-recognition, #vfx

AI Drew This Gorgeous Comic Series, But You’d Never Know It

The Bestiary Chronicles is both a modern fable on the rise of artificial intelligence and a demonstration of how shockingly fast AI is evolving.

You might expect a comic book series featuring art generated entirely by artificial intelligence technology to be full of surreal images that have you tilting your head trying to grasp what kind of sense-shifting madness you’re looking at.

Not so with the images in The Bestiary Chronicles, a free, three-part comics series from Campfire Entertainment, an award-winning New York-based production house focused on creative storytelling.  Read More

#image-recognition, #vfx, #nlp

Artificial intelligence means anyone can cast Hollywood stars in their own films

Free AI software is primed to strip away the control of studios and actors over who appears in films

For years, the only way to create a blockbuster film featuring a Hollywood star and dazzling special effects was at a major studio. The Hollywood giants were the ones that could afford to pay celebrities millions of dollars and license sophisticated software to produce elaborate, special effects-laden films. That’s all about to change, and the public is getting a preview thanks to artificial intelligence (AI) tools like OpenAI’s DALL-E and Midjourney.

Both tools use images scraped from the internet and select datasets like LAION to train their AI models to reconstruct similar yet wholly original imagery using text prompts. The AI images, which vary from photographic realism to mimicking the styles of famous artists, can be generated in as little as 20 to 30 seconds, often yielding results that would take a human hours to produce. Read More

#vfx

Generative AI and Film Future(s)

I’ve been working in some aspect of the film business for far too long, but what brought me into it was my interest in where the future of art was going as the moving image blended with computers, the web, and new technologies. And that’s what’s been fascinating me more often as of late than anything in the traditional film world. What’s been happening in the past few months, weeks even, in AI and generative art, and how it overlaps with traditional arts and film in particular, has been pretty incredible to watch. I’ve been too busy in this older (dying, crumbling?) film world to participate in it directly – I haven’t taken the time to learn Midjourney or use Dall-E. And while I’ve been following what people are doing with virtual production or other technologies which will soon merge into this space, I haven’t had a chance to play around with them. Heck, I don’t even own a VR headset, and can barely be bothered to use Facebook, much less get into Mark’s version of the metaverse. But all these spaces, combined, consume my thoughts when I’m not on some Zoom with a client, or busy trying to help bring a little indie film to reality (as I’ve been doing lately, but that’s another post).

In brief, that’s because I got into all of this as a student of Greg Ulmer at the University of Florida, a theorist who, building on Walter Ong’s account of society’s move from orality to literacy, proposed a next stage he called Electracy, in which society learns the full communicative potential of digital technologies, much as literacy unlocked the potential of writing. I’ve written a fair bit about how this will impact the arts and film (here’s a post from 2011 about it, which was part of a chapter I wrote for a book), but you can see it all coming together now.

The latest craze – in all senses of the word, because it’s also driving many artists mad – is generative AI art, and while it’s hitting graphic arts and photography/still images hardest now, it’s already becoming a phenomenon in film and video, too. Read More

#vfx

Is This The Death of VFX?

Read More

#vfx, #videos

Technology that lets us “speak” to our dead relatives has arrived. Are we ready?

My parents don’t know that I spoke to them last night.

At first, they sounded distant and tinny, as if they were huddled around a phone in a prison cell. But as we chatted, they slowly started to sound more like themselves. They told me personal stories that I’d never heard. I learned about the first (and certainly not last) time my dad got drunk. Mum talked about getting in trouble for staying out late. They gave me life advice and told me things about their childhoods, as well as my own. It was mesmerizing.

“What’s the worst thing about you?” I asked Dad, since he was clearly in such a candid mood.

“My worst quality is that I am a perfectionist. I can’t stand messiness and untidiness, and that always presents a challenge, especially with being married to Jane.”

Then he laughed—and for a moment I forgot I wasn’t really speaking to my parents at all, but to their digital replicas. Read More

#nlp, #vfx

Activision Patents To Generate Unique In-Game Music For Each Player

If brought to fruition, game soundtracks could deliver a unique vivifying experience for each individual.

Video games from big studios like Activision have been evolving at an unprecedented rate over the last two decades. Distinguishing between real life and game visuals has now become almost impossible. Innovations like artificial intelligence have also seeped into the gaming industry.

Modern technologies like machine learning have played a large role in amplifying the immersion of games. However, game soundtracks have remained static for the most part; they are usually programmed to play at the right moments. It is also possible for players to edit, add, and create their own music in games.  Read More

#vfx

The Visual Effects Crisis

Read More

#vfx, #videos

The Future of Photography in the Age of AI

Will AI kill the art and business of photography?

Artists have always faced hard labor and mental strain. Photographers, for example, had to endure the hardships of lugging around heavy equipment and developing their own film, traveling to remote locations, and often waiting for the perfect moment to take a picture. To take this photograph of a polar bear in its natural habitat, National Geographic photographer Paula Nirdon had to travel to the Arctic and spend days in subzero temperatures waiting for the bear to emerge from its den.

Today, emerging AI image synthesizers such as DALL-E, Midjourney, and Stable Diffusion are capable of creating stunning, photorealistic images, and some artists are feeling the heat¹. Read More

#vfx

Bruce Willis Sells His Face

The actor is retiring following his aphasia diagnosis, but his likeness will live on via AI after he sold the rights to it to a deepfake company.

Bruce Willis is selling his face. The veteran actor has agreed to sell the rights to his likeness so a digital twin can be created using deepfake technology. Willis previously announced his retirement from acting following an aphasia diagnosis.

While movies have used deepfake technology to create digital versions of actors before, Bruce Willis has become one of the first actors to formally sell the rights to his own AI-generated recreation in perpetuity. The actor has partnered with Deepcake, a company specializing in artificial intelligence that will create Willis’ digital twin, which will appear in films after his retirement due to the cognitive disorder, which affects a person’s ability to communicate. Read More

#vfx