How to Create an AI-Generated Video with ChatGPT, Synthesia, and Descript

Learn how we created an AI-generated video with a ChatGPT script, a Synthesia avatar and voice, and stock footage from Descript.

There is a lot of buzz around new and exciting artificial intelligence (AI) and machine learning (ML) tools for video production and creation. So, I wanted to see first-hand how some of these tools perform! As an experiment, I set out to create a high-quality video using generative AI in less than 15 minutes. Read More
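If you'd rather script the ChatGPT step than use the web UI, a few lines against the OpenAI API will do it. This is a minimal sketch, assuming the openai Python package (v1 client) and an OPENAI_API_KEY environment variable; the model name and prompts are illustrative, not the exact ones used in the experiment.

```python
# Minimal sketch: drafting a short video script with the OpenAI API.
# Assumptions: the `openai` v1 Python client is installed and
# OPENAI_API_KEY is set; the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works here
    messages=[
        {"role": "system",
         "content": "You write concise scripts for 60-second explainer videos."},
        {"role": "user",
         "content": "Write a 60-second script introducing AI tools "
                    "for video production."},
    ],
)

script = response.choices[0].message.content
print(script)  # paste into Synthesia as the avatar's narration
```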

#vfx

Andrew Ng Weighs In on Call for Pause

1/The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I’m seeing many new applications in education, healthcare, food, … that’ll help many people. Improving GPT-4 will help. Let’s balance the huge value AI is creating vs. realistic risks.

2/There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy.

Read More

#trust

The Path From APIs to Containers

Explore how microservices fueled the journey from APIs to containers and paved the way for enhanced API development and software integration.

In recent years, the rise of microservices has drastically changed the way we build and deploy software. The most important aspect of this shift has been the move from traditional API architectures driven by monolithic applications to containerized microservices. This shift has not only improved the scalability and flexibility of our systems but has also given rise to new approaches to software development and deployment.

In this article, we will explore the path from APIs to containers and examine how microservices have paved the way for enhanced API development and software integration. Read More
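To make the shift concrete, here is a minimal sketch of the kind of small, single-purpose service that gets containerized; Flask and the endpoints shown are illustrative choices, not drawn from the article.

```python
# Minimal sketch of a containerizable microservice. Assumptions: Flask
# is installed; the /health and /orders endpoints are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Orchestrators such as Kubernetes probe endpoints like this one
    # to decide whether a service instance is alive.
    return jsonify(status="ok")

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    # A real service would query its own datastore here; a microservice
    # owns its data rather than sharing a monolith's database.
    return jsonify(id=order_id, status="shipped")

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the service is reachable from outside its container.
    app.run(host="0.0.0.0", port=8080)
```

Packaged into an image with a short Dockerfile, a service like this can be built, shipped, and scaled independently of the rest of the system, which is exactly the move away from monolithic API architectures the article describes.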

#devops

Cerebras releases seven large language models for generative AI, trained on its specialized hardware

Artificial intelligence chipmaker Cerebras Systems Inc. today announced it has trained and now released seven GPT-based large language models for generative AI, making them available to the wider research community.

The new LLMs are notable as they are the first to be trained using CS-2 systems in the Cerebras Andromeda AI supercluster, which are powered by the Cerebras WSE-2 chip that is specifically designed to run AI software. In other words, they’re among the first LLMs to be trained without relying on graphics processing unit-based systems. Cerebras said it’s sharing not only the models but also the weights and the training recipe used, via a standard Apache 2.0 license. Read More
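If you want to try one of the released checkpoints yourself, here is a minimal sketch using the Hugging Face transformers library. It assumes the Cerebras-GPT checkpoints are hosted on the Hugging Face Hub under the cerebras organization; the specific model ID, prompt, and generation settings are illustrative.

```python
# Minimal sketch: loading and sampling from one of the released
# Cerebras-GPT checkpoints via Hugging Face `transformers`.
# Assumption: the checkpoints live on the Hub under the `cerebras`
# organization; swap in a larger size if your hardware allows.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cerebras/Cerebras-GPT-111M"  # smallest of the seven models
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Generative AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```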

#chatbots

Transformers are Sample-Efficient World Models

Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems. Recently, many model-based methods have been designed to address this issue, with learning in the imagination of a world model being one of the most prominent approaches. However, while virtually unlimited interaction with a simulated environment sounds appealing, the world model has to be accurate over extended periods of time. Motivated by the success of Transformers in sequence modeling tasks, we introduce IRIS, a data-efficient agent that learns in a world model composed of a discrete autoencoder and an autoregressive Transformer. With the equivalent of only two hours of gameplay in the Atari 100k benchmark, IRIS achieves a mean human normalized score of 1.046, and outperforms humans on 10 out of 26 games, setting a new state of the art for methods without lookahead search. To foster future research on Transformers and world models for sample-efficient reinforcement learning, we release our code and models at https://github.com/eloialonso/iris.

Read More
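For readers who want the shape of the method before opening the repo, here is a toy structural sketch in PyTorch of the two components the abstract names: a discrete autoencoder that turns frames into tokens, and an autoregressive Transformer that rolls the world forward in imagination. All shapes and modules are illustrative stand-ins, not the paper's architecture; the real implementation is at the linked repository.

```python
# Toy sketch of an IRIS-style world model: frames -> discrete tokens
# (autoencoder), then autoregressive rollout of future tokens
# (Transformer). Shapes, sizes, and modules are illustrative stand-ins.
import torch
import torch.nn as nn

VOCAB, TOKENS_PER_FRAME, DIM = 512, 16, 128

class DiscreteAutoencoder(nn.Module):
    # Stand-in for the VQ-VAE-style tokenizer: maps a frame to a grid
    # of discrete token indices (here via a linear projection + argmax).
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(64 * 64 * 3, TOKENS_PER_FRAME * VOCAB)

    def encode(self, frame):
        logits = self.proj(frame.flatten(1))
        return logits.view(-1, TOKENS_PER_FRAME, VOCAB).argmax(-1)

class WorldModel(nn.Module):
    # Autoregressive Transformer over frame tokens: given the tokens so
    # far, predict a distribution over the next token.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.encoder(self.embed(tokens), mask=mask)
        return self.head(h)  # next-token logits at each position

def imagine(world_model, start_tokens, horizon):
    # "Learning in imagination": roll the world model forward
    # autoregressively with no further environment interaction.
    tokens = start_tokens
    for _ in range(horizon * TOKENS_PER_FRAME):
        logits = world_model(tokens)[:, -1]
        next_tok = torch.multinomial(logits.softmax(-1), 1)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens

if __name__ == "__main__":
    ae, wm = DiscreteAutoencoder(), WorldModel()
    frame = torch.rand(1, 64, 64, 3)        # one fake observation
    tokens = ae.encode(frame)               # frame -> discrete tokens
    dream = imagine(wm, tokens, horizon=3)  # imagine 3 future frames
    print(dream.shape)                      # (1, 4 * TOKENS_PER_FRAME)
```

In the paper, the policy is then trained on such imagined trajectories rather than on fresh environment steps, which is what makes the agent sample-efficient with respect to real interaction.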

#reinforcement-learning