A new neural network could help computers code themselves

Computer programming has never been easy. The first coders wrote programs out by hand, scrawling symbols onto graph paper before converting them into large stacks of punched cards that could be processed by the computer. One mark out of place and the whole thing might have to be redone.

Nowadays coders use an array of powerful tools that automate much of the job, from catching errors as you type to testing the code before it’s deployed. But in other ways, little has changed. That’s why some people think we should just get machines to program themselves.

Justin Gottschlich, director of the machine programming research group at Intel, and his colleagues call this approach machine programming. Read More

#devops, #nlp

Neuroevolution of Self-Interpretable Agents

Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight. It is a consequence of the selective attention in perception that lets us remain focused on important parts of our world without distraction from irrelevant details. Motivated by selective attention, we study the properties of artificial agents that perceive the world through the lens of a self-attention bottleneck. By constraining access to only a small fraction of the visual input, we show that their policies are directly interpretable in pixel space. We find neuroevolution ideal for training self-attention architectures for vision-based reinforcement learning (RL) tasks, allowing us to incorporate modules that can include discrete, non-differentiable operations useful for our agent. We argue that self-attention has properties similar to indirect encoding, in the sense that large implicit weight matrices are generated from a small number of key-query parameters, enabling our agent to solve challenging vision-based tasks with at least 1000x fewer parameters than existing methods. Since our agents attend only to task-critical visual hints, they are able to generalize to environments where task-irrelevant elements are modified, while conventional methods fail. Read More
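The key-query mechanism described above is easy to illustrate. Below is a minimal, hypothetical sketch (not the authors’ code; all names and sizes are illustrative) of how a small pair of projection matrices induces a much larger implicit attention matrix over image patches, with a discrete top-K patch selection acting as the bottleneck:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_to_patches(patches, W_k, W_q, top_k=10):
    """Score patches with key-query self-attention and keep the top K.

    patches: (n_patches, d) flattened patch features.
    The top-K selection is discrete and non-differentiable, which is why
    neuroevolution, rather than gradient descent, is a natural fit here.
    """
    keys = patches @ W_k                                # (n_patches, d_small)
    queries = patches @ W_q                             # (n_patches, d_small)
    scores = queries @ keys.T / np.sqrt(W_k.shape[1])   # implicit (n, n) matrix
    importance = softmax(scores, axis=-1).sum(axis=0)   # per-patch importance
    return np.argsort(importance)[-top_k:]              # indices the policy sees

# 2 * 48 * 4 = 384 key-query parameters control a 256 x 256 attention matrix.
rng = np.random.default_rng(0)
patches = rng.normal(size=(256, 48))
W_k, W_q = rng.normal(size=(48, 4)), rng.normal(size=(48, 4))
print(attend_to_patches(patches, W_k, W_q))
```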

#image-recognition, #reinforcement-learning, #vision

NIST Launches Investigation of Face Masks’ Effect on Face Recognition Software

Algorithms created before the pandemic generally perform less accurately with digitally masked faces.

Now that so many of us are covering our faces to help reduce the spread of COVID-19, how well do face recognition algorithms identify people wearing masks? The answer, according to a preliminary study by the National Institute of Standards and Technology (NIST), is: with great difficulty. Even the best of the 89 commercial facial recognition algorithms tested had error rates between 5% and 50% when matching digitally masked faces to photos of the same person without a mask.

The results were published today as a NIST Interagency Report (NISTIR 8311), the first in a planned series from NIST’s Face Recognition Vendor Test (FRVT) program on the performance of face recognition algorithms on faces partially covered by protective masks. Read More

#image-recognition

How to Build a Machine Learning Model

A Visual Guide to Learning Data Science

Learning data science may seem intimidating, but it doesn’t have to be that way. Let’s make learning data science fun and easy. The challenge, then, is: how exactly do we make learning data science both fun and easy?

Cartoons are fun, and since “a picture is worth a thousand words,” why not make a cartoon about data science? With that goal in mind, I’ve set out to doodle on my iPad the elements that are required for building a machine learning model. Read More
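As a companion to the cartoon, here is a minimal, generic sketch of those elements in code (data, split, model, training, evaluation), using scikit-learn on a toy dataset; the article’s own steps may differ:

```python
# Generic model-building steps: data -> split -> train -> evaluate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                      # 1. get the data
X_train, X_test, y_train, y_test = train_test_split(   # 2. hold out a test set
    X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(random_state=42)        # 3. pick a model
model.fit(X_train, y_train)                            # 4. train it
print(accuracy_score(y_test, model.predict(X_test)))   # 5. evaluate on unseen data
```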

#data-science, #machine-learning

The Role of AI in IoT

This article takes a look at the role of AI in IoT and the benefits of integrating the two. Blending them yields a powerful technology that can make human lives easier, make businesses more effective, and provide ways to manage operations at a global scale. Read More

#iot

NIST selects algorithms to form a post-quantum cryptography standard

The race to protect sensitive electronic information against the threat of quantum computers has entered the home stretch.

After spending more than three years examining new approaches to encryption and data protection that could defeat an assault from a quantum computer, the National Institute of Standards and Technology (NIST) has winnowed the 69 submissions it initially received down to a final group of 15. Read More

#cyber, #quantum

Ernst & Young’s (EY) bridging AI’s trust gaps

The rapid development of artificial intelligence (AI) is raising urgent questions about ethical and consumer protection issues — from potential bias in algorithmic recruiting decisions to the privacy implications of health monitoring applications.

This survey finds that policymakers have a clear vision of AI’s ethical risks and are moving toward implementation; among companies, by contrast, the consensus is much weaker. Read More

#ethics, #trust

Major DevOps Challenges and How to Address Them

DevOps grew out of the need to break down silos, establish better ownership of the delivered product, and improve collaboration across teams. It entails two major components of the business space – Development and Operations.

Typically, DevOps is the practice of the development and operations teams working together from the start of the software development lifecycle (SDLC) through deployment and operations.

…Whether it is aligning goals and priorities to promote cross-functional team collaboration or shifting away from older infrastructure models, DevOps poses certain challenges to enterprises. Read More

#devops

Learning to Cartoonize Using White-box Cartoon Representations

This paper presents an approach for image cartoonization. By observing cartoon painting behavior and consulting artists, we propose to separately identify three white-box representations from images: the surface representation, which captures the smooth surfaces of cartoon images; the structure representation, which refers to the sparse color blocks and flattened global content in the celluloid-style workflow; and the texture representation, which reflects the high-frequency texture, contours, and details in cartoon images. A Generative Adversarial Network (GAN) framework is used to learn the extracted representations and to cartoonize images.

The learning objectives of our method are separately based on each extracted representation, making our framework controllable and adjustable. This enables our approach to meet artists’ requirements in different styles and diverse use cases. Qualitative comparisons and quantitative analyses, as well as user studies, have been conducted to validate the effectiveness of this approach, and our method outperforms previous methods in all comparisons. Finally, an ablation study demonstrates the influence of each component in our framework. Read More
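To make the three representations concrete, here is a hypothetical sketch of how one might extract surface-, structure-, and texture-like views of an image; the summary does not specify the paper’s exact operators, so the bilateral filter, SLIC superpixels, and grayscale conversion below are illustrative stand-ins:

```python
import cv2
import numpy as np
from skimage.segmentation import slic

img = cv2.imread("photo.jpg")  # placeholder input path

# Surface: edge-preserving smoothing approximates the smooth cartoon surface.
surface = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

# Structure: averaging colors within superpixels yields sparse, flat color blocks.
labels = slic(img, n_segments=200, start_label=0)
structure = np.zeros_like(img)
for seg in np.unique(labels):
    mask = labels == seg
    structure[mask] = img[mask].mean(axis=0)

# Texture: a single grayscale channel keeps high-frequency contours and details.
texture = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
```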

#gans, #image-recognition

From virtual Lolitas to extreme sex, deepfake porn is blurring the lines of consent and reality

Exploring the dark, liberating, and potentially catastrophic future of technology’s freakiest frontier.

…In this time of creeping incertitude and simmering distrust of news, the potential power of convincing, well-wrought, virtually undetectable deepfakes rightly raises a shuddering horror. Read More

#fake