A few days ago OpenAI released two impressive models, CLIP and DALL-E. While DALL-E generates images from text descriptions, CLIP can classify a very wide range of images by turning image classification into an image-text similarity problem. The issue with conventional image classification networks is that they are trained on a fixed set of categories. CLIP doesn't work this way: it learns directly from raw text about images, so it isn't limited to a predefined label set. This is quite impressive: CLIP can classify images from many datasets with competitive accuracy without any dataset-specific training.
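To make the "classification as text similarity" idea concrete, here is a minimal sketch of zero-shot classification using the open-source CLIP package OpenAI released alongside the paper. It assumes PyTorch and the `clip` package are installed, and the image path `dog.jpg` and the candidate prompts are purely illustrative.

```python
# Minimal sketch: zero-shot image classification with CLIP.
# Assumptions: torch and the openai/CLIP package are installed,
# and a local file "dog.jpg" exists (illustrative only).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate labels are just free-form text prompts -- no fixed label set.
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]

image = preprocess(Image.open("dog.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # Embed the image and the prompts into the same space and
    # rank the prompts by their similarity to the image.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.3f}")
```

Because the "classes" are just sentences, swapping in a different label set requires no retraining, only new prompts.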