Deep Generative Design: Integration of Topology Optimization and Generative Models

Deep learning has recently been applied to various research areas of design optimization. This study presents the need for, and effectiveness of, adopting deep learning in the generative design (or design exploration) research area. This work proposes an artificial intelligence (AI)-based deep generative design framework that is capable of generating numerous design options which are not only aesthetic but also optimized for engineering performance. The proposed framework integrates topology optimization and generative models (e.g., generative adversarial networks (GANs)) in an iterative manner to explore new design options, thus generating a large number of designs starting from limited previous design data. In addition, anomaly detection can evaluate the novelty of generated designs, thus helping designers choose among design options. The 2D wheel design problem is used as a case study to validate the proposed framework. The framework manifests better aesthetics, diversity, and robustness of generated designs than previous generative design methods. Read More
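The iterative loop the abstract describes can be sketched at a very high level: a generative model proposes candidate designs, a topology optimizer refines them for engineering performance, and the refined designs flow back into the training data. The sketch below uses trivial stand-ins for both stages (names and logic are illustrative assumptions, not the paper's actual models).

```python
# Hypothetical sketch of the deep generative design loop: generate, optimize,
# and feed optimized designs back as new training data. Both stages are
# deliberately simplistic stand-ins, not the paper's GAN or optimizer.
import random

def generate_designs(train_data, n):
    # Stand-in for sampling a GAN trained on train_data:
    # perturb randomly chosen known designs.
    return [[x + random.gauss(0, 0.1) for x in random.choice(train_data)]
            for _ in range(n)]

def topology_optimize(design):
    # Stand-in for a performance-driven topology optimizer:
    # clamp material densities to the feasible range [0, 1].
    return [min(1.0, max(0.0, x)) for x in design]

def deep_generative_design(seed_designs, iterations=3, per_iter=8):
    data = list(seed_designs)
    for _ in range(iterations):
        candidates = generate_designs(data, per_iter)
        optimized = [topology_optimize(d) for d in candidates]
        data.extend(optimized)  # refined designs augment the dataset
    return data

designs = deep_generative_design([[0.5] * 4, [0.8] * 4])
```

Starting from only two seed designs, each iteration enlarges the pool that the next round of generation draws from, mirroring how the framework grows many designs from limited prior data.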

#gans

An AI bot has figured out how to draw like Banksy. And it’s uncanny

GANksy aims to produce images that bear resemblance to works by the UK’s most famous street artist

With Banksy’s market hotter than ever, hopeful collectors might be keen to discover what appears to be a newly released collection of 265 works by the anonymous street artist. Except they are not.

Rather, they are the creation of a new artificial intelligence (AI) software named GANksy, which has been programmed to create works that attempt to mimic those of “a certain street artist”. Read More

GANksy’s 00111111: warrior (2020) © VoleWTF
#gans

Understanding the Role of Individual Units in a Deep Neural Network

Deep neural networks excel at finding hierarchical representations that solve complex tasks over large data sets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing. Read More
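The core intervention in this line of work, ablating a small set of hidden units and observing how the output changes, can be illustrated with a minimal sketch. A random activation tensor stands in for a real CNN or GAN layer; the function names are illustrative, not the paper's API.

```python
# Minimal sketch of the unit-ablation idea: zero out a chosen set of hidden
# units (channels) in a layer's activations, leaving all other units intact,
# so the downstream effect of those units can be observed.
import numpy as np

def ablate_units(activations, units):
    """Return a copy of (channels, H, W) activations with the given channels zeroed."""
    out = activations.copy()
    out[list(units)] = 0.0
    return out

rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 4, 4))  # 8 hypothetical units in one layer
ablated = ablate_units(acts, {2, 5})
```

In a real dissection experiment, `ablated` would be passed through the rest of the network to see which objects disappear from (or appear in) the output scene.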

#gans, #neural-networks

Training Generative Adversarial Networks with Limited Data

Training generative adversarial networks (GANs) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42. Read More
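The "adaptive" part of the mechanism can be sketched as a simple feedback controller: when the discriminator's outputs on real images signal overfitting, the augmentation probability is nudged up, otherwise down. The constants, names, and overfitting heuristic below are assumptions based on the abstract's description, not the official StyleGAN2-ADA code.

```python
# Hedged sketch of an adaptive augmentation controller: raise the augmentation
# probability p when the discriminator looks overfit on real images, lower it
# otherwise. Heuristic and constants are illustrative assumptions.
import numpy as np

def update_aug_probability(p, d_real_logits, target=0.6, step=0.01):
    # r_t = E[sign(D(real))] is near 1 when D is confidently correct on reals
    # (a symptom of overfitting) and near 0 when D is uncertain.
    r_t = np.mean(np.sign(d_real_logits))
    p += step if r_t > target else -step
    return float(np.clip(p, 0.0, 1.0))

p = 0.0
overfit_logits = np.array([2.1, 1.7, 3.0, 0.9])  # D is sure the reals are real
for _ in range(10):
    p = update_aug_probability(p, overfit_logits)
```

Because the augmentations are applied stochastically with probability `p`, the discriminator rarely sees unmodified training images when `p` is high, which is what counteracts memorization in the limited-data regime.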

#gans

These weird, unsettling photos show that AI is getting smarter

Models are learning how to generate images from captions, a sign that they’re getting better at understanding our world.

Of all the AI models in the world, OpenAI’s GPT-3 has most captured the public’s imagination. It can spew poems, short stories, and songs with little prompting, and has been demonstrated to fool people into thinking its outputs were written by a human. But its eloquence is more of a parlor trick, not to be confused with real intelligence.

Nonetheless, researchers believe that the techniques used to create GPT-3 could contain the secret to more advanced AI. … Now new research from the Allen Institute for Artificial Intelligence, AI2, has taken this idea to the next level. The researchers have developed a new text-and-image model, also known as a visual-language model, that can generate images given a caption. Read More

#gans, #nlp

Learning to Cartoonize Using White-box Cartoon Representations

This paper presents an approach for image cartoonization. By observing the cartoon painting behavior and consulting artists, we propose to separately identify three white-box representations from images: the surface representation that contains a smooth surface of cartoon images, the structure representation that refers to the sparse color-blocks and flattened global content in the celluloid style workflow, and the texture representation that reflects high-frequency texture, contours, and details in cartoon images. A Generative Adversarial Network (GAN) framework is used to learn the extracted representations and to cartoonize images.

The learning objectives of our method are separately based on each extracted representation, making our framework controllable and adjustable. This enables our approach to meet artists’ requirements in different styles and diverse use cases. Qualitative comparisons and quantitative analyses, as well as user studies, have been conducted to validate the effectiveness of this approach, and our method outperforms previous methods in all comparisons. Finally, the ablation study demonstrates the influence of each component in our framework. Read More
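The surface/texture split at the heart of the white-box decomposition can be illustrated with a simple low-pass filter: the smoothed image stands in for the surface representation, and the residual carries the high-frequency texture. The box filter here is an illustrative assumption, not the paper's actual edge-preserving (guided) filtering.

```python
# Illustrative decomposition of an image into a smooth "surface" component
# (low-pass filtered) and a high-frequency "texture" residual. A mean filter
# stands in for the paper's edge-preserving surface extraction.
import numpy as np

def box_blur(img, k=3):
    """Mean filter with edge padding; stand-in surface representation."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

rng = np.random.default_rng(1)
image = rng.random((8, 8))
surface = box_blur(image)   # smooth, low-frequency content
texture = image - surface   # high-frequency details and contours
```

Defining texture as the residual guarantees the two components reconstruct the original exactly, which is what lets each representation be supervised with its own loss term.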

#gans, #image-recognition

Implicit Neural Representations with Periodic Activation Functions

Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal’s spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions. Please see the project website for a video overview of the proposed method and all applications. Read More
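A SIREN layer is simply a linear map followed by a scaled sine activation, paired with the paper's uniform initialization scheme (first layer U(-1/n, 1/n); hidden layers U(-sqrt(6/n)/w0, sqrt(6/n)/w0), with w0 = 30). A minimal NumPy sketch:

```python
# Minimal NumPy sketch of SIREN layers: linear map + sin(w0 * .), with the
# principled uniform initialization the paper proposes.
import numpy as np

def siren_layer(n_in, n_out, w0=30.0, first=False, rng=None):
    rng = rng or np.random.default_rng()
    # First layer: U(-1/n, 1/n); hidden layers: U(-sqrt(6/n)/w0, sqrt(6/n)/w0).
    bound = 1.0 / n_in if first else np.sqrt(6.0 / n_in) / w0
    W = rng.uniform(-bound, bound, size=(n_out, n_in))
    b = rng.uniform(-bound, bound, size=n_out)
    return lambda x: np.sin(w0 * (x @ W.T + b))

rng = np.random.default_rng(0)
layer1 = siren_layer(2, 16, first=True, rng=rng)  # input: e.g. (x, y) coordinates
layer2 = siren_layer(16, 16, rng=rng)
coords = rng.uniform(-1, 1, size=(4, 2))
features = layer2(layer1(coords))
```

Because sine is infinitely differentiable and its derivative is again a (shifted) sine, a trained SIREN represents a signal's derivatives as well, which is what the boundary-value-problem applications above rely on.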

#gans, #neural-networks

NASA’s New Moon-Bound Space Suits Will Get a Boost From AI

A few months ago, NASA unveiled its next-generation space suit that will be worn by astronauts when they return to the moon in 2024 as part of the agency’s plan to establish a permanent human presence on the lunar surface. The Extravehicular Mobility Unit—or xEMU—is NASA’s first major upgrade to its space suit in nearly 40 years and is designed to make life easier for astronauts who will spend a lot of time kicking up moon dust. It will allow them to bend and stretch in ways they couldn’t before, easily don and doff the suit, swap out components for a better fit, and go months without making a repair.

But the biggest improvements weren’t on display at the suit’s unveiling last fall. Instead, they’re hidden away in the xEMU’s portable life-support system, the astro backpack that turns the space suit from a bulky piece of fabric into a personal spacecraft. It handles the space suit’s power, communications, oxygen supply, and temperature regulation so that astronauts can focus on important tasks like building launch pads out of pee concrete. And for the first time ever, some of the components in an astronaut life-support system will be designed by artificial intelligence. Read More

#gans

This Image of a White Barack Obama Is AI’s Racial Bias Problem In a Nutshell

A pixelated image of Barack Obama upsampled to the image of a white man has sparked another discussion on racial bias in artificial intelligence and machine learning. Read More

#bias, #gans

DeepMind Made A Superhuman AI For 57 Atari Games!

Read More

#gans, #videos