Self-Supervised GANs

If you aren’t familiar with Generative Adversarial Networks (GANs), they are a massively popular generative modeling technique formed by pitting two Deep Neural Networks, a generator and a discriminator, against each other. This adversarial loss has sparked the interest of many Deep Learning and Artificial Intelligence researchers. However, despite the beauty of the GAN formulation and the eye-opening results of the state-of-the-art architectures, GANs are generally very difficult to train. One of the best ways to get better results with GANs is to provide class labels; this is the basis of the conditional GAN model. This article will show how Self-Supervised Learning can overcome the need for class labels when training GANs and rival the performance of conditional GAN models.

Before we get into how Self-Supervised Learning improves GANs, we will introduce the concept of Self-Supervised Learning. Compared to the popular families of Supervised and Unsupervised Learning, Self-Supervised Learning is most similar to Unsupervised Learning. Self-Supervised tasks include things such as image colorization, predicting the relative location of patches extracted from an image, or, in this case, predicting the rotation angle of an image. These tasks are dubbed “Self-Supervised” because the data lends itself to these surrogate tasks. In this sense, Self-Supervised tasks take the form of (X, Y) pairs; however, the (X, Y) pairs are constructed automatically from the dataset itself and do not require human labeling. The paper discussed in this article summarizes Self-Supervised Learning as follows: “one can make edits to the given image and ask the network to predict the edited part”. This is the basic idea behind Self-Supervised Learning.
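To make the rotation-prediction task concrete, here is a minimal sketch of how (X, Y) pairs can be constructed automatically from unlabeled images. The function name and NumPy-based setup are our own illustration, not code from the paper: each image is rotated by 0, 90, 180, and 270 degrees, and the rotation index serves as the free label.

```python
import numpy as np

def make_rotation_batch(images):
    """Build a self-supervised (X, Y) batch from unlabeled images.

    Each image is copied four times, rotated by 0, 90, 180, and 270
    degrees; the number of quarter-turns is the automatically
    generated label. (Illustrative sketch, not the paper's code.)
    """
    rotated, labels = [], []
    for img in images:
        for k in range(4):                      # k counter-clockwise quarter-turns
            rotated.append(np.rot90(img, k=k))  # rotate in the image plane
            labels.append(k)                    # label = rotation class 0..3
    return np.stack(rotated), np.array(labels)

# Example: 8 unlabeled 32x32 RGB images yield 32 labeled training pairs.
images = np.random.rand(8, 32, 32, 3)
X, y = make_rotation_batch(images)
print(X.shape, y.shape)  # (32, 32, 32, 3) (32,)
```

A classifier trained on these pairs must recognize object orientation to succeed, which is exactly the kind of semantic signal the GAN discriminator can exploit without any human-provided labels.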

#gans, #neural-networks, #self-supervised