Beginner’s Guide to Diffusion Models

An intuitive understanding of how AI-generated art is made by Stable Diffusion, Midjourney, or DALL-E

Recently, there has been increased interest in OpenAI’s DALL-E, Stable Diffusion (a free alternative to DALL-E), and Midjourney (hosted on a Discord server). While AI-generated art is very cool, what is even more captivating is how it works in the first place. In the last section, I also include some resources for anyone who wants to get started in the AI art space.

So how do these technologies work? They use something called a latent diffusion model, and the idea behind it is actually ingenious. Read More
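The core trick behind diffusion models is to gradually corrupt an image with Gaussian noise and train a network to reverse the process. The forward (noising) half can be sketched in a few lines — this is a toy illustration with a made-up beta schedule, not Stable Diffusion's actual code:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=np.random.default_rng()):
    """Toy forward diffusion: noise a clean sample x0 to timestep t.

    Uses the standard closed form q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0,
    (1 - alpha_bar_t) * I), so we can jump straight to step t without looping.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]          # cumulative signal retention
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# A linear beta schedule over 1000 steps (values are illustrative).
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.ones((4, 4))                           # stand-in for a (latent) image
xT = forward_diffuse(x0, 999, betas)           # near-pure noise by the last step
```

The "latent" part of a latent diffusion model is that this noising and denoising happens in a compressed latent space produced by an autoencoder, rather than on raw pixels, which is what makes these models practical to run.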

#diffusion

Extracting Training Data from Diffusion Models

Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work, we show that diffusion models memorize individual images from their training data and emit them at generation time. With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models, ranging from photographs of individual people to trademarked company logos. We also train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models such as GANs, and that mitigating these vulnerabilities may require new advances in privacy-preserving training. Read More
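In rough outline, a generate-and-filter pipeline samples many images from a model and keeps those suspiciously close to a training image. The sketch below is a much-simplified stand-in: the `generate` callable and the plain L2 distance are hypothetical placeholders, whereas the paper works with a real diffusion model and more robust near-duplicate detection:

```python
import numpy as np

def generate_and_filter(generate, train_set, threshold, n_samples):
    """Simplified generate-and-filter sketch: sample n_samples candidates and
    keep those within `threshold` L2 distance of some training image — a crude
    proxy for detecting memorized training examples.
    """
    memorized = []
    for _ in range(n_samples):
        candidate = generate()
        nearest = min(np.linalg.norm(candidate - x) for x in train_set)
        if nearest < threshold:
            memorized.append(candidate)
    return memorized

# Toy demo: a "generator" that occasionally reproduces a training example.
rng = np.random.default_rng(0)
train_set = [rng.standard_normal((8, 8)) for _ in range(5)]

def generate():
    # 20% of the time, emit an exact copy of a training image (memorization).
    return train_set[0] if rng.random() < 0.2 else rng.standard_normal((8, 8))

hits = generate_and_filter(generate, train_set, threshold=1e-6, n_samples=50)
```

The tight threshold means only near-exact copies survive the filter; the hard part in practice is defining "close enough" for high-dimensional images, which is where the paper's near-duplicate metrics come in.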

#chatbots, #nlp, #diffusion