In the last two years, more than 200 papers have been written on how machine learning (ML) can fail because of adversarial attacks on its algorithms and data; this number balloons if we incorporate non-adversarial failure modes. The spate of papers has made it difficult for ML practitioners, let alone engineers, lawyers, and policymakers, to keep up with the attacks against and defenses of ML systems. However, as these systems become more pervasive, the need to understand how they fail, whether by the hand of an adversary or due to the inherent design of the system, will only become more pressing. The purpose of this document is to tabulate both of these failure modes in a single place.
— Intentional failures, wherein the failure is caused by an active adversary attempting to subvert the system to attain her goals: to misclassify the result, infer private training data, or steal the underlying algorithm.
— Unintentional failures wherein the failure is because an ML system produces a formally correct but completely unsafe outcome.
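As a minimal illustration of the first, intentional category, here is a sketch (not from the document) of a fast-gradient-sign-style evasion attack against a hand-built linear classifier; the weights, input, and step size are all illustrative:

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predict class 1 if score > 0.
w = np.array([1.0, -2.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

# A benign input the model classifies as class 1.
x = np.array([2.0, 0.5])
assert predict(x) == 1

# FGSM-style perturbation: step against the sign of the score gradient
# (for a linear model, the gradient of the score w.r.t. x is just w).
eps = 1.6
x_adv = x - eps * np.sign(w)

# The small, bounded perturbation flips the prediction.
print(predict(x_adv))  # 0
```

The same bounded-perturbation idea underlies attacks on deep networks, where the gradient is obtained by backpropagation rather than read off the weights.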
Read More
A DARPA Perspective on Artificial Intelligence

DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools.
DARPA sees three waves of AI:
— Handcrafted Knowledge
— Statistical Learning
— Contextual Adaptation
Read More
Using Artificial Intelligence To Analyze Markets: An Interview With Ainstein AI CEO Suzanne Cook
To learn more about the use of artificial intelligence as it may be applied to analyzing stocks and markets, I asked the CEO and founder of Ainstein AI about her work in this area.
Suzanne Cook is a Wharton School graduate and a seven-time Institutional Investor All Star Analyst. Read More
Bias-Resilient Neural Network
The presence of bias and confounding effects is inarguably one of the most critical challenges in machine learning applications and has led to pivotal debates in recent years. Such challenges range from spurious associations of confounding variables in medical studies to racial bias in gender or face recognition systems. One solution is to enhance datasets and organize them such that they do not reflect biases, which is a cumbersome and intensive task. The alternative is to make use of available data and build models that account for these biases. Traditional statistical methods apply straightforward techniques, such as residualization or stratification, to precomputed features to account for confounding variables. However, these techniques are generally not suitable for end-to-end deep learning methods. In this paper, we propose a method based on the adversarial training strategy to learn discriminative features that are unbiased and invariant to the confounder(s). This is enabled by incorporating a new adversarial loss function that encourages vanishing correlation between the bias and the learned features. We apply our method to synthetic data, medical images, and a gender classification (Gender Shades Pilot Parliaments Benchmark) dataset. Our results show that the features learned by our method not only yield superior prediction performance but also are uncorrelated with the bias or confounder variables. The code is available at http://github.com/QingyuZhao/BR-Net/. Read More
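To make the "vanishing correlation" idea concrete, here is a rough sketch of a squared-correlation penalty of the kind the abstract describes; the function name, toy data, and thresholds are mine, not taken from the BR-Net code:

```python
import numpy as np

def correlation_penalty(features, confounder):
    """Squared Pearson correlation between a 1-D feature and a confounder.

    Driving this toward zero during adversarial training encourages
    features that are statistically uncorrelated with the bias variable.
    """
    f = features - features.mean()
    c = confounder - confounder.mean()
    denom = np.sqrt((f ** 2).sum() * (c ** 2).sum()) + 1e-12
    r = (f * c).sum() / denom
    return r ** 2

rng = np.random.default_rng(0)
confounder = rng.normal(size=200)

biased = confounder + 0.1 * rng.normal(size=200)  # feature leaking the bias
clean = rng.normal(size=200)                      # feature independent of it

print(correlation_penalty(biased, confounder))  # near 1: heavily penalized
print(correlation_penalty(clean, confounder))   # near 0: barely penalized
```

In the adversarial setup, a term like this is added to the training objective so that the feature extractor is rewarded for driving the penalty toward zero while still predicting the target.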
AI Explainability (Google Whitepaper)
Systems built around AI will affect and, in many cases, redefine medical interventions, autonomous transportation, criminal justice, financial risk management, and many other areas of society. However, considering the challenges involved, the usefulness and fairness of these AI systems will be gated by our ability to understand, explain, and control them. Read More
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games, the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled, our new algorithm achieved a new state of the art. When evaluated on Go, chess, and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules. Read More
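The iterative prediction structure the abstract describes can be sketched in miniature: a latent state is unrolled by a learned dynamics function, and at each step separate heads emit the reward, policy, and value. The components below are random linear stand-ins for illustration only, not MuZero's trained networks:

```python
import numpy as np

# Stand-in "learned" components (random maps, for illustration only).
rng = np.random.default_rng(1)
D, A = 4, 3                                # latent dim, number of actions
W_dyn = rng.normal(size=(D, D + A)) * 0.1  # dynamics: (state, action) -> state
W_rew = rng.normal(size=D)                 # reward head
W_pol = rng.normal(size=(A, D))            # policy head
W_val = rng.normal(size=D)                 # value head

def dynamics(state, action):
    onehot = np.eye(A)[action]
    return np.tanh(W_dyn @ np.concatenate([state, onehot]))

def predict(state):
    logits = W_pol @ state
    policy = np.exp(logits) / np.exp(logits).sum()
    return W_rew @ state, policy, W_val @ state

# Unroll the model iteratively from an initial latent state; each step
# predicts exactly the three planning quantities the abstract lists.
state = np.zeros(D)
for action in [0, 2, 1]:
    state = dynamics(state, action)
    reward, policy, value = predict(state)
    print(f"r={reward:+.2f}  pi={np.round(policy, 2)}  v={value:+.2f}")
```

In MuZero proper, these rollouts happen inside a Monte Carlo tree search, so planning never touches the real environment's rules.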
List of Quantum Clouds (Nov 2019)
Clouds for doing quantum computing are becoming increasingly popular. Here is a list, with links, of those quantum clouds that already exist or are imminent. All are commercial but usually free for small jobs and open to the public. Most use open-source quantum computing software, but some have opted to keep their software proprietary. In alphabetical order; ✅ indicates working quantum computing hardware. Read More
Researchers develop AI that reads lips from video footage
AI and machine learning algorithms capable of reading lips from videos aren’t anything out of the ordinary, in truth. Back in 2016, researchers from Google and the University of Oxford detailed a system that could annotate video footage with 46.8% accuracy, outperforming a professional human lip-reader’s 12.4% accuracy. But even state-of-the-art systems struggle to overcome ambiguities in lip movements, preventing their performance from surpassing that of audio-based speech recognition.
In pursuit of a more performant system, researchers at Alibaba, Zhejiang University, and the Stevens Institute of Technology devised a method dubbed Lip by Speech (LIBS), which uses features extracted from speech recognizers as complementary clues. They say it achieves industry-leading accuracy on two benchmarks, besting the baseline by margins of 7.66% and 2.75% in character error rate. Read More
Chinese Public AI R&D Spending: Provisional Findings
China aims to become “the world’s primary AI innovation center” by 2030. Toward that end, the Chinese government is spending heavily on AI research and development (R&D). This memo provides a provisional, open-source estimate of China’s spending.
We assess with low to moderate confidence that China’s public investment in AI R&D was on the order of a few billion dollars in 2018. With higher confidence, we assess that China’s government is not investing tens of billions of dollars annually in AI R&D, as some have suggested. Read More
Microsoft Is Taking Quantum Computers to the Cloud
Microsoft got where it is by ensuring that Windows ran on many different types of hardware. On Monday, the company said its cloud computing platform will soon offer access to the most exotic hardware of all: quantum computers.
Microsoft is one of several tech giants investing in quantum computing, which by crunching data using strange quantum mechanical processes promises unprecedented computational power. The company is now preparing its Azure cloud computing service to offer select customers access to three prototype quantum computers, from engineering conglomerate Honeywell and two startups, IonQ, which emerged from the University of Maryland, and QCI, spun out of Yale. Read More