Consistency models (CMs) are a powerful class of diffusion-based generative models optimized for fast sampling. Most existing CMs are trained using discretized timesteps, which introduce additional hyperparameters and are prone to discretization errors. While continuous-time formulations can mitigate these issues, their success has been limited by training instability. To address this, we propose a simplified theoretical framework that unifies previous parameterizations of diffusion models and CMs, identifying the root causes of instability. Based on this analysis, we introduce key improvements in diffusion process parameterization, network architecture, and training objectives. These changes enable us to train continuous-time CMs at an unprecedented scale, reaching 1.5B parameters on ImageNet 512×512. Our proposed training algorithm, using only two sampling steps, achieves FID scores of 2.06 on CIFAR-10, 1.48 on ImageNet 64×64, and 1.88 on ImageNet 512×512, narrowing the gap in FID scores with the best existing diffusion models to within 10%. — Read More
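For readers unfamiliar with how consistency models sample in "only two sampling steps": below is a minimal sketch of generic multistep consistency sampling in the style of earlier discrete-time CM work, not the paper's exact continuous-time (TrigFlow) parameterization. Here `consistency_fn` is a placeholder for a trained model that maps a noisy sample at a given noise level directly to a clean estimate, and the noise levels are illustrative defaults.

```python
import torch

def two_step_consistency_sample(consistency_fn, shape, sigma_max=80.0,
                                sigma_mid=0.8, sigma_min=0.002, device="cpu"):
    """Sketch of generic two-step consistency sampling (not the paper's exact scheme).

    consistency_fn(x, sigma) is assumed to map a noisy sample at noise level
    sigma directly to an estimate of the clean sample.
    """
    # Step 1: start from pure noise at the maximum noise level and map to data.
    x = torch.randn(shape, device=device) * sigma_max
    x0 = consistency_fn(x, sigma_max)

    # Step 2: re-noise to an intermediate level, then map to data once more.
    noise = torch.randn_like(x0)
    x_mid = x0 + (sigma_mid**2 - sigma_min**2) ** 0.5 * noise
    return consistency_fn(x_mid, sigma_mid)
```

The second, partial re-noising step is what distinguishes two-step sampling from a single forward pass; it trades one extra network evaluation for noticeably better sample quality.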
Meta announces Movie Gen, an AI-powered video generator
A new AI-powered video generator from Meta produces high-definition footage complete with sound, the company announced today. The announcement comes several months after competitor OpenAI unveiled Sora, its text-to-video model — though public access to Movie Gen isn’t happening yet.
Movie Gen uses text inputs to automatically generate new videos, as well as edit existing footage or still images. The New York Times reports that the audio added to videos is also AI-generated, matching the imagery with ambient noise, sound effects, and background music. The videos can be generated in different aspect ratios. — Read More
SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement
We present SF3D, a novel method for rapid and high-quality textured object mesh reconstruction from a single image in just 0.5 seconds. Unlike most existing approaches, SF3D is explicitly trained for mesh generation, incorporating a fast UV unwrapping technique that enables swift texture generation rather than relying on vertex colors. The method also learns to predict material parameters and normal maps to enhance the visual quality of the reconstructed 3D meshes. Furthermore, SF3D integrates a delighting step to effectively remove low-frequency illumination effects, ensuring that the reconstructed meshes can be easily used in novel illumination conditions. Experiments demonstrate the superior performance of SF3D over the existing techniques. Project page: this https URL. — Read More
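To make the pipeline the abstract describes more concrete, here is a purely hypothetical sketch of the stages it names (feed-forward geometry prediction, fast UV unwrapping, texture prediction with a delighting pass, and material/normal-map prediction). None of the function names below come from the actual SF3D code; they only illustrate how the pieces could fit together.

```python
# Hypothetical sketch of the stages described in the SF3D abstract; the
# function names are illustrative and do not come from the SF3D codebase.

def reconstruct_textured_mesh(image, model):
    # Feed-forward mesh prediction from a single input image.
    mesh = model.predict_geometry(image)

    # Fast UV unwrapping so textures live in an atlas rather than vertex colors.
    mesh.uvs = model.fast_uv_unwrap(mesh)

    # Texture prediction, followed by a delighting pass that removes
    # low-frequency illumination baked into the input photo.
    albedo = model.predict_texture(image, mesh)
    mesh.albedo = model.delight(albedo)

    # Material parameters and a normal map so the mesh relights plausibly
    # under novel illumination.
    mesh.material = model.predict_material(image)
    mesh.normal_map = model.predict_normals(image, mesh)
    return mesh
```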
Revisiting Feature Prediction for Learning Visual Representations from Video
This paper explores feature prediction as a stand-alone objective for unsupervised learning from video and introduces V-JEPA, a collection of vision models trained solely using a feature prediction objective, without the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision. The models are trained on 2 million videos collected from public datasets and are evaluated on downstream image and video tasks. Our results show that learning by predicting video features leads to versatile visual representations that perform well on both motion- and appearance-based tasks, without adaptation of the model’s parameters; e.g., using a frozen backbone. Our largest model, a ViT-H/16 trained only on videos, obtains 81.9% on Kinetics-400, 72.2% on Something-Something-v2, and 77.9% on ImageNet1K. — Read More
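As a rough illustration of what a "feature prediction objective" means in practice, here is a minimal PyTorch-style sketch of a JEPA-family loss: features of masked video regions are regressed onto targets produced by a separate (typically EMA) encoder that receives no gradients. The module names, masking interface, and the choice of L1 regression are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def feature_prediction_loss(context_encoder, target_encoder, predictor,
                            video, context_mask, target_mask):
    """Sketch of a JEPA-style feature-prediction objective (illustrative only).

    context_encoder, target_encoder and predictor are assumed to be nn.Modules;
    target_encoder is typically an EMA copy of the context encoder and
    receives no gradients.
    """
    # Encode only the visible (context) patches.
    context_feats = context_encoder(video, mask=context_mask)

    # Targets are features of the masked patches from the no-gradient encoder.
    with torch.no_grad():
        target_feats = target_encoder(video, mask=target_mask)

    # Predict the masked-patch features from the context representation.
    pred_feats = predictor(context_feats, target_mask)

    # Regress predicted features onto the targets; no pixel reconstruction.
    return F.l1_loss(pred_feats, target_feats)
```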
Kling, the AI video generator rival to Sora that’s wowing creators
If you follow any AI influencers or creators on social media, there’s a good chance you may have seen them more excited than usual lately about a new AI video generation model called “Kling.”
The videos it generates from pure text prompts and a few configurable in-app buttons and settings look incredibly realistic, on par with OpenAI’s Sora, a still non-public, invitation-only, closed-beta model that OpenAI has shared with a small group of artists and filmmakers while it tests the model and probes its potential for adversarial (read: risky, objectionable) uses.
[W]here did Kling come from? What does it offer? And how can you get your hands on it? Read on to find out. — Read More
Ukraine is riddled with land mines. Drones and AI can help
Early on a June morning in 2023, my colleagues and I drove down a bumpy dirt road north of Kyiv in Ukraine. The Ukrainian Armed Forces were conducting training exercises nearby, and mortar shells arced through the sky. We arrived at a vast field for a technology demonstration set up by the United Nations. Across the 25-hectare field (about 62 acres), the U.N. workers had scattered 50 to 100 inert mines and other ordnance. Our task was to fly our drone over the area and use our machine learning software to detect as many as possible. And we had to turn in our results within 72 hours.
The scale was daunting: The area was 10 times as large as anything we’d attempted before with our drone demining startup, Safe Pro AI. My cofounder Gabriel Steinberg and I used flight-planning software to program a drone to cover the whole area with some overlap, taking photographs the whole time. It ended up taking the drone 5 hours to complete its task, and it came away with more than 15,000 images. Then we raced back to the hotel with the data it had collected and began an all-night coding session.
We were happy to see that our custom machine learning model took only about 2 hours to crunch through all the visual data and identify potential mines and ordnance. But constructing a map for the full area that included the specific coordinates of all the detected mines in under 72 hours was simply not possible with any reasonable computational resources. The following day (which happened to coincide with the short-lived Wagner Group rebellion), we rewrote our algorithms so that our system mapped only the locations where suspected land mines were identified—a more scalable solution for our future work. — Read More
Microsoft AI creates scary real talkie videos from a single photo
Microsoft Research Asia has revealed an AI model that can generate frighteningly realistic deepfake videos from a single still image and an audio track. How will we be able to trust what we see and hear online from here on in?
… After training the [VASA-1] model on footage of around 6,000 real-life talking faces from the VoxCeleb2 dataset, the technology is able to generate scary real video where the newly animated subject is not only able to accurately lip-sync to a supplied voice audio track, but also sports varied facial expressions and natural head movements – all from a single static headshot photo. — Read More
Stability AI Announces Stable Diffusion 3: All We Know So Far
Stability AI announced an early preview of Stable Diffusion 3, their text-to-image generative AI model. Unlike last week’s Sora text-to-video announcement from OpenAI, this preview offered only limited demonstrations of the model’s new capabilities, but some details were provided. Here, we explore what the announcement means, how the new model works, and some implications for the advancement of image generation. — Read More
Latte: Latent Diffusion Transformer for Video Generation
We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model the video distribution in the latent space. To handle the substantial number of tokens extracted from videos, four efficient variants are introduced that decompose the spatial and temporal dimensions of the input videos. To improve the quality of generated videos, we determine best practices for Latte through rigorous experimental analysis covering video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where it achieves results comparable to recent T2V models. We believe Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation. — Read More
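To illustrate what "decomposing the spatial and temporal dimensions" of video tokens can look like, here is a generic PyTorch sketch of a factorized space-time Transformer block in which patches first attend within their own frame and then each patch location attends across frames. This is an illustrative pattern only, not Latte's exact block design or any of its four specific variants.

```python
import torch
import torch.nn as nn

class FactorizedSpaceTimeBlock(nn.Module):
    """Illustrative spatial/temporal-factorized Transformer block (not Latte's exact design)."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, tokens):
        # tokens: (batch, frames, patches, dim) latent video tokens.
        b, t, s, d = tokens.shape

        # Spatial attention: patches attend within their own frame.
        x = tokens.reshape(b * t, s, d)
        h = self.norm1(x)
        x = x + self.spatial_attn(h, h, h, need_weights=False)[0]

        # Temporal attention: each patch location attends across frames.
        x = x.reshape(b, t, s, d).permute(0, 2, 1, 3).reshape(b * s, t, d)
        h = self.norm2(x)
        x = x + self.temporal_attn(h, h, h, need_weights=False)[0]

        return x.reshape(b, s, t, d).permute(0, 2, 1, 3)
```

Factorizing attention this way keeps the cost roughly proportional to frames x patches rather than their product squared, which is why such decompositions are popular for long token sequences from video.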
OpenAI introduces Sora, its text-to-video AI model
OpenAI’s latest model takes text prompts and turns them into ‘complex scenes with multiple characters, specific types of motion,’ and more.
OpenAI is launching a new video-generation model, and it’s called Sora. The AI company says Sora “can create realistic and imaginative scenes from text instructions.” The text-to-video model allows users to create photorealistic videos up to a minute long — all based on prompts they’ve written.
Sora is capable of creating “complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” according to OpenAI’s introductory blog post. The company also notes that the model can understand how objects “exist in the physical world,” as well as “accurately interpret props and generate compelling characters that express vibrant emotions.” — Read More