How to jam neural networks

Deep neural networks (DNNs) have been a very active field of research for eight years now, and for the last five we’ve seen a steady stream of adversarial examples – inputs that will bamboozle a DNN so that it thinks a 30mph speed limit sign is a 60 instead, and even magic spectacles to make a DNN get the wearer’s gender wrong.

So far, these attacks have targeted the integrity or confidentiality of machine-learning systems. Can we do anything about availability? Read More
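The integrity attacks mentioned above are commonly built with one-step gradient methods. As a toy illustration only (not the technique from the article), here is a minimal fast-gradient-sign (FGSM-style) attack against a two-feature logistic classifier standing in for a DNN; all weights and inputs are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier standing in for a DNN: predicts class 1 if w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y_true, eps):
    """One-step fast-gradient-sign attack on the logistic cross-entropy loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w          # d(loss)/dx for this model
    return x + eps * np.sign(grad_x)   # nudge every feature by +/- eps

x = np.array([0.3, 0.1])               # correctly classified as class 1
x_adv = fgsm(x, y_true=1, eps=0.5)
print(predict(x), predict(x_adv))      # the small perturbation flips the label
```

The point of the sketch is that a perturbation bounded by eps per feature is enough to cross the decision boundary, which is why such inputs can look unchanged to a human while flipping the model's answer.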

#adversarial, #cyber

PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

The primary aim of single-image super-resolution is to construct a high-resolution (HR) image from a corresponding low-resolution (LR) input. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present a novel super-resolution algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require training on databases of LR-HR image pairs for supervised learning). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the “downscaling loss,” which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee that our outputs are realistic. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show extensive experimental results demonstrating the efficacy of our approach in the domain of face super-resolution (also known as face hallucination). Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible. Read More
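The core loop — searching a generator's latent space for an output that downscales to the observed LR input — can be sketched in a few lines. The snippet below is a toy under stated assumptions: a made-up linear "generator" and an average-pooling downscaler stand in for the GAN and degradation operator the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a linear "generator" G(z) = A @ z mapping an
# 8-dim latent to a 16-pixel "HR image", and a 4x average-pooling
# downscaler D. PULSE itself uses a GAN generator, not a linear map.
A = rng.standard_normal((16, 8))
D = np.kron(np.eye(4), np.full((1, 4), 0.25))   # 4x average pooling (4x16)

lr_image = rng.standard_normal(4)               # observed low-res input

def downscaling_loss(z):
    """Squared error between the downscaled generator output and the LR input."""
    return np.sum((D @ A @ z - lr_image) ** 2)

# Latent-space search: plain gradient descent on the downscaling loss.
z = np.zeros(8)
M = D @ A
for _ in range(2000):
    grad = 2 * M.T @ (M @ z - lr_image)
    z -= 0.05 * grad

sr_image = A @ z                # super-resolved output
print(downscaling_loss(z))      # small: the output now downscales correctly
```

The optimization never compares against an HR ground truth, which is what makes the formulation self-supervised: only the LR input and the generator constrain the search.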

#image-recognition, #self-supervised

Understanding Artificial Intelligence From Intelligent Automation

In the age of digital disruption, intelligent automation is omnipresent. A heady mix of AI and RPA, intelligent automation is adopted for the sheer ease with which it automates rule-based tasks and handles unstructured data. In this digital age, organizations on the path of change management adopt intelligent automation in a bid to outsmart their competitors.

You may wonder: can these two terms be used interchangeably? The short answer is no. IA and AI are two different concepts. The main point of difference is that while artificial intelligence is about algorithms programmed to mimic human cognitive functions, intelligent automation hands rule-based, high-volume work processes to AI-enabled RPA bots to ensure improved safety, operational efficiency, and business continuity. Read More

#artificial-intelligence, #chatbots, #robotics

Machine Learning Vs. Predictive Analytics: Which Is Better For Business?

What’s the first thing that comes to mind when you hear “artificial intelligence” (AI)? While I, Robot was a great film, it doesn’t count. Many don’t realize how deep the rabbit hole goes, either. There are dozens, if not hundreds, of subsets of AI.

They all work in their own unique way with different benefits and uses. Unfortunately, 37% of executives struggle to understand how the technologies work. This confusion naturally leads to the question: “Which one should my company use, and how do we deploy it?” Read More

#artificial-intelligence

New Government Guidelines Make Accelerating Artificial Intelligence Possible

Prior to COVID-19, government investment in AI had surpassed billions of dollars in research and development. With a premium now placed on speed to develop vaccines and diagnostics for the coronavirus, there is a renewed emphasis on the role of AI and how governments can ensure it is used in a trusted manner.

The World Economic Forum’s Artificial Intelligence and Machine Learning team built, and piloted with partners, tools for governments to procure artificial intelligence solutions built with ethics in mind. The Procurement in a Box toolkit includes concrete advice for purchasing, risk assessments, proposal drafting and evaluation. Read More

#ethics

MLOps with a Feature Store

If AI is to become embedded in the DNA of enterprise computing systems, enterprises must first re-align their machine learning (ML) development processes to include data engineers, data scientists, and ML engineers in a single automated development, integration, testing, and deployment pipeline. This blog introduces platforms and methods for continuous integration (CI), continuous delivery (CD), and continuous training (CT) with machine learning platforms, with details on how to do CI/CD machine learning operations (MLOps) with a Feature Store. We will see how the Feature Store refactors the monolithic end-to-end ML pipeline into a feature engineering pipeline and a model training pipeline. Read More
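That refactoring can be sketched with a toy in-memory feature store; the class names and API below are illustrative assumptions, not those of any specific platform:

```python
from dataclasses import dataclass, field

# Minimal in-memory stand-in for a feature store. Real systems persist
# versioned feature groups; everything here is an illustrative sketch.
@dataclass
class FeatureStore:
    groups: dict = field(default_factory=dict)

    def write(self, name, rows):
        self.groups[name] = rows

    def read(self, name):
        return self.groups[name]

# Pipeline 1: feature engineering, owned by data engineers and run on
# its own schedule; it only writes to the store.
def feature_pipeline(store, raw_events):
    features = [{"user": e["user"], "spend_x2": e["spend"] * 2}
                for e in raw_events]
    store.write("user_features", features)

# Pipeline 2: model training, run independently (enabling CT); it only
# reads from the store, so it can be retriggered without re-running
# feature engineering.
def training_pipeline(store):
    rows = store.read("user_features")
    # stand-in "training": average the engineered feature
    return sum(r["spend_x2"] for r in rows) / len(rows)

store = FeatureStore()
feature_pipeline(store, [{"user": "a", "spend": 10},
                         {"user": "b", "spend": 20}])
model = training_pipeline(store)
print(model)  # -> 30.0
```

The design point is the decoupling: because the two pipelines share state only through the store, each can be versioned, tested, and redeployed in its own CI/CD loop.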

#devops

The Iguazio Data Science Platform

The Iguazio Data Science Platform (“the platform”) is a fully integrated and secure data science platform as a service (PaaS), which simplifies development, accelerates performance, facilitates collaboration, and addresses operational challenges. It provides a complete data science workflow in a single ready-to-use platform that includes all the required building blocks for creating data science applications from research to production. Read More

#mlaas

The Promise And Risks Of Artificial Intelligence: A Brief History

Artificial intelligence (AI) has recently become a focus of efforts to maintain and enhance U.S. military, political, and economic competitiveness. The Defense Department’s 2018 strategy for AI, released not long after the creation of a new Joint Artificial Intelligence Center, proposes to accelerate the adoption of AI by fostering “a culture of experimentation and calculated risk taking,” an approach drawn from the broader National Defense Strategy. But what kinds of calculated risks might AI entail? The AI strategy has almost nothing to say about the risks incurred by the increased development and use of AI. On the contrary, the strategy proposes using AI to reduce risks, including those to “both deployed forces and civilians.” Read More

#artificial-intelligence, #dod

Brookings Institution Report: How to improve cybersecurity for artificial intelligence

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies. Read More

#adversarial, #cyber

Defining The Services-As-Software Business Model For AI

My angel investment in Botkeeper has been one of the most influential in my thinking on how AI strategy is evolving. When new high-impact technologies come along, they often shake up status-quo business models, and because no one understands what business models might emerge on the other side, it’s a wonderful time to make a few bets on some startups. As an investor, these initial bets help me learn how the space is evolving, which means that when the real wave of startups embracing this new tech arrives, I’m much better educated than most people who sat out the initial round. And on top of that, sometimes you get lucky on the early bets too. Read More

#strategy, #investing