Microsoft Research proposes face-swapping AI and face forgery detector

State-of-the-art AI and machine learning algorithms can generate lifelike images of places and objects, but they’re also adept at swapping faces from one person to another — and at spotting sophisticated deepfakes. In a pair of academic papers published by teams at Microsoft Research and Peking University, researchers propose FaceShifter and Face X-Ray: the former a framework for high-fidelity, occlusion-aware face swapping, the latter a representation for detecting forged face images. They say that both achieve industry-leading results compared with several baselines without sacrificing performance, and that they require substantially less data than previous approaches. Read More

#fake

AlphaZero beat humans at Chess and StarCraft, now it’s working with quantum computers

A team of researchers from Aarhus University in Denmark let DeepMind's AlphaZero algorithm loose on a few quantum computing optimization problems and, much to everyone's surprise, the AI was able to solve the problems without any outside expert knowledge. Not bad for a machine learning paradigm designed to win at games like Chess and StarCraft. Read More

#quantum

An algorithm that learns through rewards may show how our brain does too

In 1951, Marvin Minsky, then a student at Harvard, borrowed observations from animal behavior to try to design an intelligent machine. Drawing on the work of physiologist Ivan Pavlov, who famously used dogs to show how animals learn through punishments and rewards, Minsky created a computer that could continuously learn through similar reinforcement to solve a virtual maze.

At the time, neuroscientists had yet to figure out the mechanisms within the brain that allow animals to learn in this way. But Minsky was still able to loosely mimic the behavior, thereby advancing artificial intelligence. Several decades later, as reinforcement learning continued to mature, it in turn helped the field of neuroscience discover those mechanisms, feeding into a virtuous cycle of advancement between the two fields. Read More

#reinforcement-learning
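The reward-and-punishment idea Minsky borrowed from Pavlov is what modern reinforcement learning formalizes. A minimal sketch, assuming a made-up one-dimensional "maze" (a corridor of five cells with a reward at the far end) and standard tabular Q-learning — this is the contemporary formulation of learning from rewards, not Minsky's original machine:

```python
import random

# Toy corridor maze: states 0..4, reward only on reaching state 4.
N_STATES = 5          # cells 0..4; 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(200):                  # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # temporal-difference update: nudge Q toward reward + discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right in every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The temporal-difference error in the update line is the quantity neuroscientists later matched to dopamine signaling, which is the cross-pollination the article describes.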

NTM: Neural Turing Machines

We discuss the Neural Turing Machine (NTM), an architecture proposed by Graves et al. at DeepMind. NTMs are designed to solve tasks that require writing information to, and retrieving it from, an external memory, which makes them resemble a working-memory system: short-term storage of information plus rule-based manipulation of it. Compared with RNNs, which rely on internal memory, NTMs use attentional mechanisms to efficiently read from and write to external memory, making them a more favorable choice for capturing long-range dependencies. But, as we will see, the two approaches are not mutually exclusive and can be combined into a more powerful architecture. Read More

#deep-learning, #neural-networks
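The attentional read the summary mentions can be sketched concretely. A minimal, hedged illustration of NTM-style content-based addressing: the controller emits a key vector, attention weights are a softmax over the cosine similarity between the key and each memory row (sharpened by a parameter beta, as in Graves et al.), and the read vector is the weighted sum of rows. The toy memory and key values here are made up for illustration:

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors, with a small epsilon for stability.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-8)

def content_address(memory, key, beta):
    # Soft attention over memory rows; returns weights that sum to 1.
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)                         # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def read(memory, weights):
    # Blended read: weighted sum of memory rows (fully differentiable).
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]

memory = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 rows x 2 columns
key = [1.0, 0.05]                                # most similar to row 0
w = content_address(memory, key, beta=10.0)
r = read(memory, w)
print(w, r)   # weight concentrates on row 0; read vector is near [1, 0]
```

Because every step is a smooth weighted sum rather than a hard lookup, gradients flow through the addressing, which is what lets the whole memory system be trained end to end — and why it composes naturally with an RNN controller, as the post notes.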