TextStyleBrush: Transfer of text aesthetics from a single example

We present a novel approach for disentangling the content of a text image from all aspects of its appearance. The appearance representation we derive can then be applied to new content, for one-shot transfer of the source style to new content. We learn this disentanglement in a self-supervised manner. Our method processes entire word boxes without requiring segmentation of text from background, per-character processing, or assumptions on string lengths. We show results in different text domains that were previously handled by specialized methods, e.g., scene text and handwritten text. To these ends, we make a number of technical contributions: (1) we disentangle the style and content of a textual image into a non-parametric, fixed-dimensional style vector; (2) we propose a novel approach inspired by StyleGAN, but conditioned on the example style at multiple resolutions and on the target content; (3) we present novel self-supervised training criteria that preserve both the source style and the target content using a pre-trained font classifier and a text recognizer; and (4) we introduce Imgur5K, a new challenging dataset of handwritten word images. We offer numerous qualitative, photo-realistic results of our method. We further show that our method surpasses previous work in quantitative tests on scene-text and handwriting datasets, as well as in a user study.

Read More
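As a rough illustration of contribution (3), here is a hedged PyTorch sketch, not the paper's code, of how a frozen pre-trained font classifier and text recognizer could jointly score a generated word image. All module names, interfaces, and the exact loss forms below are assumptions for the sake of the example:

```python
# Hypothetical sketch of TextStyleBrush-style training criteria: a
# generated word image is scored by a frozen font classifier (style) and
# a frozen text recognizer (content), and the two losses are combined.
import torch
import torch.nn.functional as F
from torch import nn

class StyleContentLoss(nn.Module):
    def __init__(self, font_classifier: nn.Module, recognizer: nn.Module):
        super().__init__()
        self.font_classifier = font_classifier.eval()  # frozen, pre-trained
        self.recognizer = recognizer.eval()            # frozen, pre-trained
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, generated, style_ref, target_text_ids):
        # Style term: the generated image should produce the same font
        # embedding as the reference style image.
        style_loss = F.mse_loss(
            self.font_classifier(generated),
            self.font_classifier(style_ref),
        )
        # Content term: the recognizer should read the target string off
        # the generated image (cross-entropy over per-step characters).
        logits = self.recognizer(generated)  # (batch, seq, vocab), assumed
        content_loss = F.cross_entropy(
            logits.flatten(0, 1), target_text_ids.flatten()
        )
        return style_loss + content_loss
```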

#image-recognition, #gans

Introducing Digital Einstein

UneeQ unveiled Einstein, a digital human driven by conversational and experiential artificial intelligence (AI), on the 100th anniversary of the scientist's Nobel Prize in Physics.

Read More

Chat with him here

#gans, #videos

TransGAN: Two Transformers Can Make One Strong GAN

The recent explosive interest in transformers has suggested their potential to become powerful "universal" models for computer vision tasks, such as classification, detection, and segmentation. But how much further can transformers go? Are they ready to take on some more notoriously difficult vision tasks, e.g., generative adversarial networks (GANs)? Driven by that curiosity, we conduct the first pilot study in building a GAN completely free of convolutions, using only pure transformer-based architectures. Our vanilla GAN architecture, dubbed TransGAN, consists of a memory-friendly transformer-based generator that progressively increases feature resolution while decreasing embedding dimension, and a patch-level discriminator that is also transformer-based. We then demonstrate that TransGAN notably benefits from data augmentations (more than standard GANs), a multi-task co-training strategy for the generator, and a locally initialized self-attention that emphasizes the neighborhood smoothness of natural images. Equipped with those findings, TransGAN can effectively scale up with bigger models and high-resolution image datasets. Our best architecture achieves highly competitive performance compared to current state-of-the-art GANs based on convolutional backbones. Specifically, TransGAN sets a new state-of-the-art IS score of 10.10 and FID score of 25.32 on STL-10. It also reaches a competitive IS score of 8.63 and FID score of 11.89 on CIFAR-10, and an FID score of 12.23 on CelebA 64×64. We also conclude with a discussion of the current limitations and future potential of TransGAN.

Read More
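To make the "increase resolution while decreasing embedding dimension" idea concrete, here is a hedged PyTorch sketch of one generator stage, not the authors' implementation: a transformer layer processes a flattened token grid, then pixel-shuffle doubles the spatial size while quartering the channel width, keeping the memory footprint roughly constant per stage. Names, dimensions, and the single-layer structure are illustrative:

```python
import torch
from torch import nn

class UpsampleStage(nn.Module):
    """One illustrative TransGAN-style generator stage (hypothetical)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )

    def forward(self, tokens: torch.Tensor, h: int, w: int):
        tokens = self.block(tokens)                      # (B, h*w, dim)
        b, _, dim = tokens.shape
        grid = tokens.transpose(1, 2).reshape(b, dim, h, w)
        grid = nn.functional.pixel_shuffle(grid, 2)      # (B, dim//4, 2h, 2w)
        return grid.flatten(2).transpose(1, 2), 2 * h, 2 * w

# Each stage quadruples the token count and quarters the channel width.
stage = UpsampleStage(dim=64)
x, h, w = stage(torch.randn(2, 8 * 8, 64), 8, 8)         # x: (2, 256, 16)
```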

#gans

Artificial Intelligence Creates Better Art Than You (Sometimes)

People around the world are using intelligent machines to create new forms of art.

In late October 2018, a distinctly odd painting appeared at the fine-art auction house Christie's. At a distance, the painting looks like a 19th-century portrait of an austere gentleman dressed in black. …Our painter is a machine: an intelligent machine. Though initial estimates had the portrait selling for under $10,000, the painting went on to sell for an incredible $432,500. The portrait was not created by an inspired human mind but was generated by artificial intelligence, in the form of a generative adversarial network (GAN).

Read More

#gans

Generative Adversarial Transformers

We introduce the GANsformer, a novel and efficient type of transformer, and explore it for the task of visual generative modeling. The network employs a bipartite structure that enables long-range interactions across the image while maintaining linear computational efficiency, and can readily scale to high-resolution synthesis. It iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and can thus be seen as a generalization of the successful StyleGAN network. We demonstrate the model's strength and robustness through a careful evaluation over a range of datasets, from simulated multi-object environments to rich real-world indoor and outdoor scenes, showing that it achieves state-of-the-art results in terms of image quality and diversity while enjoying fast learning and better data efficiency. Further qualitative and quantitative experiments offer insight into the model's inner workings, revealing improved interpretability and stronger disentanglement, and illustrating the benefits and efficacy of our approach.

Read More
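As a hedged illustration of the bipartite structure described above, here is a small PyTorch sketch in which a handful of latent variables and a flattened feature grid exchange information via cross-attention in both directions. This is not the official GANsformer code; in particular, the paper uses multiplicative integration, whereas the sketch below uses simple additive residuals, and all names and dimensions are illustrative:

```python
import torch
from torch import nn

class BipartiteBlock(nn.Module):
    """Simplified two-way attention between latents and image features."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.image_to_latents = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.latents_to_image = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, latents, features):
        # Latents gather long-range context from the whole image.
        latents = latents + self.image_to_latents(latents, features, features)[0]
        # Image features are then refined in light of the updated latents.
        features = features + self.latents_to_image(features, latents, latents)[0]
        return latents, features

block = BipartiteBlock(dim=64)
z = torch.randn(2, 16, 64)        # 16 latent variables
f = torch.randn(2, 32 * 32, 64)   # 32x32 feature grid, flattened
z, f = block(z, f)
```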

#gans

Why A.I. knows who you find attractive better than you do

When it comes to earning social currency, being attractive is as good as gold.

A team of scientists from Finland has now designed a machine-learning algorithm that can plumb the depths of these subjective judgments better than we can, accurately predicting who we find attractive from our unique brainwaves, and even generating a portrait that captures those qualities, with 83 percent accuracy.

Far beyond just the laws of attraction, this novel brain-computer interface (BCI) could open a new era of BCIs that bring our unvoiced desires to life.

The research was published this February in the journal IEEE Transactions on Affective Computing.

Read More

#gans, #human

I Dream My Painting and I Paint My Dream

Dutch photographer Bas Uterwijk used artificial intelligence to create a realistic portrait of Vincent van Gogh on van Gogh’s 168th birthday.

#gans, #image-recognition

Face editing with Generative Adversarial Networks

Read More

#gans, #videos

Adaptive Discriminator Augmentation: GAN Training Breakthrough for Limited Data Applications

Read More
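Since the entry gives no detail, here is a hedged sketch of the core idea behind adaptive discriminator augmentation (Karras et al., 2020), not NVIDIA's implementation: the probability p of augmenting the discriminator's inputs is nudged up when an overfitting heuristic on real-image logits is too high and nudged down otherwise. Names, the heuristic target, and the stand-in flip augmentation are illustrative:

```python
import torch

class AdaptiveAugment:
    """Illustrative controller for the augmentation probability p."""
    def __init__(self, target: float = 0.6, step: float = 0.01):
        self.p = 0.0            # start with no augmentation
        self.target = target    # desired value of the overfitting heuristic
        self.step = step        # how fast p is adjusted

    def update(self, real_logits: torch.Tensor) -> float:
        # r_t = E[sign(D(real))] drifts toward 1 as D memorizes the reals.
        r_t = torch.sign(real_logits).mean().item()
        self.p += self.step if r_t > self.target else -self.step
        self.p = min(max(self.p, 0.0), 1.0)
        return self.p

    def maybe_augment(self, images: torch.Tensor) -> torch.Tensor:
        # Stand-in augmentation (horizontal flip); the real ADA pipeline
        # applies a long sequence of differentiable image transforms.
        mask = torch.rand(images.size(0), device=images.device) < self.p
        return torch.where(mask[:, None, None, None], images.flip(-1), images)
```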

#gans, #image-recognition

GANs with Keras and TensorFlow

… Generative Adversarial Networks were first introduced by Goodfellow et al. in their 2014 paper, Generative Adversarial Networks. These networks can be used to generate synthetic (i.e., fake) images that are perceptually near-identical to their ground-truth authentic originals.

In this tutorial you will learn how to implement Generative Adversarial Networks (GANs) using Keras and TensorFlow.

Read More
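To give a flavor of what such a tutorial covers, here is a minimal, self-contained sketch of a DCGAN-style setup in Keras/TensorFlow. The architectures and hyperparameters below are illustrative, not the tutorial's exact code:

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 100

# Generator: noise vector -> 28x28 grayscale image in [-1, 1].
generator = tf.keras.Sequential([
    layers.Dense(7 * 7 * 64, input_shape=(latent_dim,)),
    layers.LeakyReLU(0.2),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
])

# Discriminator: image -> single real/fake logit.
discriminator = tf.keras.Sequential([
    layers.Conv2D(32, 4, strides=2, padding="same", input_shape=(28, 28, 1)),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake, training=True)
        # D learns to label reals 1 and fakes 0; G learns to fool D.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    return g_loss, d_loss
```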

#gans, #python