ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders

Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and a performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M-parameter Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data. — Read More
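For readers curious about the architectural change, here is a minimal PyTorch sketch of the GRN idea described above (global aggregation of per-channel responses, divisive normalization across channels, and a learned residual calibration); details may differ slightly from the authors' released code.

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization for channels-last feature maps (N, H, W, C)."""
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x):
        # Global aggregation: L2 norm of each channel over the spatial dimensions
        gx = x.pow(2).sum(dim=(1, 2), keepdim=True).sqrt()       # (N, 1, 1, C)
        # Divisive normalization: each channel's response relative to the channel mean
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)         # (N, 1, 1, C)
        # Calibrate the input features, with learnable scale/shift and a residual path
        return self.gamma * (x * nx) + self.beta + x
```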

#image-recognition

Watch out, Midjourney! Ideogram launches AI image generator with impressive typography

Earlier this week, a new generative AI image startup called Ideogram, founded by former Google Brain researchers, launched with $16.5 million in seed funding led by a16z and Index Ventures.

Another image generator? Don’t we have enough to choose from among Midjourney, OpenAI’s DALL-E 2, and Stability AI’s Stable Diffusion? Well, Ideogram has a major selling point, as it may have finally solved a problem plaguing most other popular AI image generators to date: reliable text generation within the image, such as lettering on signs and company logos. — Read More

#image-recognition

FlexiViT: One Model for All Patch Sizes

Vision Transformers convert images to sequences by slicing them into patches. The size of these patches controls a speed/accuracy tradeoff, with smaller patches leading to higher accuracy at greater computational cost, but changing the patch size typically requires retraining the model. In this paper, we demonstrate that simply randomizing the patch size at training time leads to a single set of weights that performs well across a wide range of patch sizes, making it possible to tailor the model to different compute budgets at deployment time. We extensively evaluate the resulting model, which we call FlexiViT, on a wide range of tasks, including classification, image-text retrieval, open-world detection, panoptic segmentation, and semantic segmentation, concluding that it usually matches, and sometimes outperforms, standard ViT models trained at a single patch size in an otherwise identical setup. Hence, FlexiViT training is a simple drop-in improvement for ViT that makes it easy to add compute-adaptive capabilities to most models relying on a ViT backbone architecture. Code and pre-trained models are available at this https URL. — Read More
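As a concrete illustration of patch-size randomization, here is a rough PyTorch sketch: at each step a patch size is sampled and the patch-embedding kernel is resized to match. The paper resizes the embedding weights with a pseudo-inverse (PI-resize) operator; the plain bilinear interpolation below is a simplified stand-in, and the function and parameter names are illustrative.

```python
import random
import torch
import torch.nn.functional as F

def embed_with_random_patch_size(images, patch_weight, patch_bias,
                                 patch_sizes=(8, 12, 16, 24, 32)):
    """Patch-embed a batch with a randomly sampled patch size.

    images:       (N, 3, H, W), with H and W divisible by every candidate size
    patch_weight: (D, 3, P0, P0) embedding kernel defined at a base patch size P0
    patch_bias:   (D,)
    """
    p = random.choice(patch_sizes)
    # Resize the embedding kernel to the sampled patch size.
    # (FlexiViT uses a pseudo-inverse resize; bilinear interpolation is a stand-in.)
    w = F.interpolate(patch_weight, size=(p, p), mode="bilinear", align_corners=False)
    # Extract non-overlapping p x p patches and embed them with a strided convolution.
    tokens = F.conv2d(images, w, patch_bias, stride=p)           # (N, D, H/p, W/p)
    return tokens.flatten(2).transpose(1, 2)                     # (N, num_patches, D)
```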

#image-recognition, #training

Introducing IDEFICS: An Open Reproduction of State-of-the-Art Visual Language Model

We are excited to release IDEFICS (Image-aware Decoder Enhanced à la Flamingo with Interleaved Cross-attentionS), an open-access visual language model. IDEFICS is based on Flamingo, a state-of-the-art visual language model initially developed by DeepMind, which has not been released publicly. Similarly to GPT-4, the model accepts arbitrary sequences of image and text inputs and produces text outputs. IDEFICS is built solely on publicly available data and models (LLaMA v1 and OpenCLIP) and comes in two variants—the base version and the instructed version. Each variant is available at the 9 billion and 80 billion parameter sizes. — Read More
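The checkpoints are distributed through the Hugging Face Hub; a rough inference sketch is shown below. The class name, processor behavior, and checkpoint id are assumptions based on the transformers integration and may vary with library version.

```python
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

# Checkpoint id assumed from the Hugging Face Hub release of IDEFICS.
checkpoint = "HuggingFaceM4/idefics-9b-instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
).to(device)

# Prompts interleave text with images (given as URLs or PIL images).
prompts = [[
    "User: What is shown in this image?",
    "https://example.com/photo.jpg",   # placeholder image URL
    "\nAssistant:",
]]
inputs = processor(prompts, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```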

#image-recognition, #nlp

Stable Diffusion XL 1.0-base

SDXL consists of an ensemble-of-experts pipeline for latent diffusion: in the first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps. Note that the base model can also be used as a standalone module.

Alternatively, we can use a two-stage pipeline as follows: First, the base model is used to generate latents of the desired output size. In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as “img2img”) to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations. — Read More

Source code is available at https://github.com/Stability-AI/generative-models .
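With the diffusers library, the ensemble-of-experts handoff looks roughly like the sketch below: the base pipeline stops partway through the denoising schedule and returns latents, which the refiner finishes. The split point and parameter names follow the documented SD-XL usage pattern and may vary with diffusers version.

```python
import torch
from diffusers import DiffusionPipeline

# Model ids as published on the Hugging Face Hub.
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"

# Ensemble of experts: the base model denoises the first ~80% of the schedule
# and hands the still-noisy latents to the refiner for the final steps.
latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt=prompt, denoising_start=0.8, image=latents).images[0]
image.save("astronaut.png")
```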

#image-recognition

Google’s NEW TAPIR AI Features Have Everyone SHOCKED! 

Read More

 Read the Paper 
#big7, #image-recognition, #videos

Meta claims its new art-generating model is best-in-class

… Today, Meta announced CM3Leon (“chameleon” in clumsy leetspeak), an AI model that the company claims achieves state-of-the-art performance for text-to-image generation. CM3Leon is also distinguished by being one of the first image generators capable of generating captions for images, laying the groundwork for more capable image-understanding models going forward, Meta says.

“With CM3Leon’s capabilities, image generation tools can produce more coherent imagery that better follows the input prompts,” Meta wrote in a blog post shared with TechCrunch earlier this week. “We believe CM3Leon’s strong performance across a variety of tasks is a step toward higher-fidelity image generation and understanding.” — Read More

#image-recognition, #big7

StyleDrop: Text-To-Image Generation in Any Style

We present StyleDrop, a method that enables the generation of images that faithfully follow a specific style, powered by Muse, a text-to-image generative vision transformer. StyleDrop is extremely versatile and captures nuances and details of a user-provided style, such as color schemes, shading, design patterns, and local and global effects. It learns a new style efficiently by fine-tuning very few trainable parameters (less than 1% of total model parameters) and improves quality via iterative training with either human or automated feedback. Better yet, StyleDrop delivers impressive results even when the user supplies only a single image specifying the desired style. An extensive study shows that, for the task of style tuning text-to-image models, StyleDrop on Muse convincingly outperforms other methods, including DreamBooth and Textual Inversion on Imagen or Stable Diffusion. — Read More
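Muse itself is not publicly available, but the parameter-efficient idea (train a tiny set of adapter weights on top of a frozen generator) can be sketched generically in PyTorch. The backbone, adapter shape, and bottleneck size below are illustrative stand-ins, not StyleDrop's actual modules.

```python
import torch
import torch.nn as nn

class StyleAdapter(nn.Module):
    """Tiny bottleneck adapter added to a frozen backbone layer."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)   # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

# Freeze a stand-in generator backbone and train only the adapters on style images.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True), num_layers=12
)
for p in backbone.parameters():
    p.requires_grad = False

adapters = nn.ModuleList([StyleAdapter(768) for _ in range(12)])
trainable = sum(p.numel() for p in adapters.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable fraction: {trainable / total:.2%}")   # well under 1% of all weights
```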

#big7, #image-recognition

Paragraphica – Context to image (AI) camera

Created by Bjørn Karmann, Paragraphica is a camera that uses location data and AI to visualize a “photo” of a specific place and moment. The camera exists both as a physical prototype and as an online camera that you can try. — Read More

#image-recognition

This AI-Powered, Point-Based Photo Manipulation System is Wild

Researchers have developed a point-based image manipulation system that uses generative artificial intelligence (AI) technology to allow users to precisely control the pose, shape, expression, and layout of objects.

The research outlines how users can steer generative adversarial networks (GANs) through intuitive graphical controls. The technology is called DragGAN. — Read More

#image-recognition