MIT can secure cloud-based AI without slowing it down

It’s rather important to secure cloud-based AI systems, especially when they use sensitive data like photos or medical records. To date, though, that hasn’t been very practical: encrypting the data can render machine learning systems so slow as to be virtually unusable. MIT thankfully has a solution in the form of GAZELLE, a technology that promises to encrypt convolutional neural networks without a dramatic slowdown. The key was to meld two existing techniques in a way that avoids the usual bottlenecks those methods create. Read More
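
For context, the two techniques GAZELLE melds are homomorphic encryption, applied to the network's linear layers, and garbled circuits, applied to the nonlinear activations. As a minimal sketch of the homomorphic half only, the toy below evaluates a linear layer (a dot product) on encrypted inputs using textbook Paillier encryption. Paillier here is an illustrative stand-in, since GAZELLE itself uses a much faster packed lattice-based scheme, and the tiny primes are for demonstration only.

```python
# Toy Paillier encryption: additively homomorphic, so a server can
# evaluate a linear layer (dot product) on encrypted inputs without
# ever seeing them. Illustrative only: small primes, integer weights;
# GAZELLE itself uses a packed lattice-based scheme instead.
import math
import random

# --- key generation (client side) ---
p, q = 2147483647, 2147483629               # demo-sized primes
n = p * q
n_sq = n * n
g = n + 1                                   # standard choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n_sq) - 1) // n, -1, n)   # L(g^lam mod n^2)^-1 mod n

def encrypt(m):
    r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return ((pow(c, lam, n_sq) - 1) // n) * mu % n

# --- server side: linear layer over ciphertexts ---
def enc_dot(enc_x, weights):
    """Homomorphic dot product: multiplying ciphertexts adds plaintexts,
    raising a ciphertext to w multiplies its plaintext by w."""
    acc = encrypt(0)
    for c, w in zip(enc_x, weights):
        acc = (acc * pow(c, w, n_sq)) % n_sq
    return acc

x = [3, 1, 4]                               # client's private input
w = [2, 5, 7]                               # server's integer weights
enc_x = [encrypt(v) for v in x]             # client encrypts and uploads
result = decrypt(enc_dot(enc_x, w))         # client decrypts the answer
assert result == sum(a * b for a, b in zip(x, w))   # 39
print(result)
```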

#cloud, #homomorphic-encryption

ML Confidential: Machine Learning on Encrypted Data

We demonstrate that, by using a recently proposed leveled homomorphic encryption scheme, it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data. Since the computational complexity of the homomorphic encryption scheme depends primarily on the number of levels of multiplications to be carried out on the encrypted data, we define a new class of machine learning algorithms in which the algorithm’s predictions, viewed as functions of the input data, can be expressed as polynomials of bounded degree. We propose confidential algorithms for binary classification based on polynomial approximations to least-squares solutions obtained by a small number of gradient descent steps. We present experimental validation of the confidential machine learning pipeline and discuss the trade-offs regarding computational complexity, prediction accuracy and cryptographic security. Read More
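
A minimal plaintext sketch of the paper's core idea: starting from zero weights, each gradient-descent step on the least-squares objective is itself a polynomial in the training data, so a predictor produced by a small, fixed number of steps is a polynomial of bounded degree and hence evaluable under a leveled HE scheme. The data and parameters below are invented for illustration; only the outline mirrors the paper.

```python
# Sketch of a "bounded-degree" classifier: a fixed, small number of
# gradient-descent steps on least squares yields weights that are
# polynomials of bounded degree in the training data (degree grows
# linearly with the step count), keeping multiplicative depth low.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # training inputs
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)                     # binary labels in {-1, +1}

w = np.zeros(5)
eta, steps = 0.001, 7                       # few steps => bounded degree
for _ in range(steps):
    w = w - eta * X.T @ (X @ w - y)         # each step: polynomial in X, y

# the sign threshold is not a polynomial; in the confidential setting
# the client applies it after decrypting the raw score
preds = np.sign(X @ w)
print("train accuracy:", np.mean(preds == y))
```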

#homomorphic-encryption, #machine-learning

Unsupervised Machine Learning on Encrypted Data

In the context of Fully Homomorphic Encryption, which allows computations on encrypted data, Machine Learning has been one of the most popular applications in the recent past. All of these works, however, have focused on supervised learning, where there is a labeled training set that is used to configure the model. In this work, we take the first step into the realm of unsupervised learning, which is an important area in Machine Learning and has many real-world applications, by addressing the clustering problem. To this end, we show how to implement the K-means algorithm. This algorithm poses several challenges in the FHE context, including a division, which we tackle by using a natural encoding that allows division and may be of independent interest. While this theoretically solves the problem, performance in practice is not optimal, so we then propose some changes to the clustering algorithm to make it executable under more conventional encodings. We show that our new algorithm achieves a clustering accuracy comparable to the original K-means algorithm, but has less than 5% of its runtime. Read More
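
A minimal plaintext sketch of why division is the sticking point: if centroids are kept as (sum, count) pairs, the assignment step can compare distances by cross-multiplication instead of dividing. This is a generic fraction-free trick shown in the clear; the paper's actual encodings and algorithm changes differ, so treat it purely as illustration.

```python
# Division-free K-means assignment. Centroids are held as (sum, count)
# pairs; nearest-centroid comparisons use cross-multiplication, so no
# ciphertext division is ever needed:
#   ||x - S_i/n_i||^2 < ||x - S_j/n_j||^2
#   <=>  n_j^2 * ||n_i*x - S_i||^2  <  n_i^2 * ||n_j*x - S_j||^2
import numpy as np

def assign(x, sums, counts):
    """Index of the nearest centroid S_i/n_i, computed without dividing."""
    # scaled squared distance: ||n_i * x - S_i||^2, an integer quantity
    scaled = [np.sum((n * x - s) ** 2) for s, n in zip(sums, counts)]
    best = 0
    for j in range(1, len(sums)):
        # compare scaled[j]/n_j^2 against scaled[best]/n_best^2
        if scaled[j] * counts[best] ** 2 < scaled[best] * counts[j] ** 2:
            best = j
    return best

# two clusters as (coordinate sums, point counts): centroids (3,3), (10,0)
sums = [np.array([9, 9]), np.array([30, 0])]
counts = [3, 3]
print(assign(np.array([4, 4]), sums, counts))   # -> 0, nearer (3,3)
print(assign(np.array([9, 1]), sums, counts))   # -> 1, nearer (10,0)
```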

#homomorphic-encryption, #machine-learning

These Three Security Trends Are Key to Decentralize Artificial Intelligence

Decentralized artificial intelligence (AI) is one of the most promising trends in the AI space. The hype around decentralized AI has increased lately with the rise in popularity of blockchain technologies. While the value proposition of decentralized AI systems is very clear from a conceptual standpoint, their implementation is full of challenges. Arguably, the biggest challenges in implementing decentralized AI architectures are in the area of security and privacy.

The foundation of decentralized AI systems is an environment in which different parties such as data providers, data scientists and consumers collaborate to create, train and execute AI models without the need for a centralized authority. That type of infrastructure requires not only establishing unbiased trust between the parties but also solving a few security challenges. Let’s take a very simple scenario of a company that wants to create a series of AI models to detect patterns in its sales data. In a decentralized model, the company will publish a series of datasets to a group of data scientists who will collaborate to create different machine learning models. During that process, the data scientists will interact with other parties that will train and regularize the models. Enforcing the privacy of the data as well as the security of the communications between the different parties is essential to enable the creation of AI models in a decentralized manner. Read More
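
One standard building block for this kind of multi-party privacy (sketched here as an illustration, not something the article prescribes) is additive secret sharing: each party splits its private value into random shares that sum to it, so an aggregator learns only the total and never an individual contribution.

```python
# Additive secret sharing, a common building block for privacy in
# decentralized / multi-party settings. Each party splits its private
# value into random shares summing to it mod a prime; servers only
# ever see sums of shares, never any single party's value.
import random

PRIME = 2**61 - 1                       # field modulus (a Mersenne prime)

def share(value, n_parties):
    """Split value into n_parties random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# three parties with private values (e.g. local model-update components)
secrets = [12, 7, 30]
all_shares = [share(s, 3) for s in secrets]

# share j of every party goes to server j; each server sums blindly
server_totals = [sum(col) % PRIME for col in zip(*all_shares)]

# combining the servers' totals reveals only the aggregate
print(sum(server_totals) % PRIME)       # -> 49, no single value exposed
```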

#gans, #homomorphic-encryption, #neural-networks

Chinese military to replace Windows OS amid fears of US hacking

Amidst an escalating trade war and political tensions with the US, Beijing officials have decided to develop a custom operating system that will replace the Windows OS on computers used by the Chinese military.

The decision, while not made official through the government’s normal press channels, was reported earlier this month by Canada-based military magazine Kanwa Asian Defence.

Per the magazine, Chinese military officials won’t be jumping ship from Windows to Linux but will develop a custom OS. Read More

#china

Building The Analytics Team At Wish

When I first joined Wish two and a half years ago, things were going well. The Wish app had reached top positions on both the iOS and Android app stores, and was selling over two million items a day.

Very few people believed that a large business could be built from selling low-priced products. Using data, Wish has been able to test and challenge these assumptions. Being data-driven was in the company’s DNA.

But the company’s massive growth brought huge growing pains on the analytics side. Every team needed urgent data support yet lacked visibility into its ownership areas, and Wish’s analytics capabilities were still in their infancy, unable to keep up with the demand. Read More

#devops, #strategy

Human + Machine

Artificial Intelligence is no longer just a futuristic notion; it’s here right now, leading the Fourth Industrial Revolution. And everyone is talking about AI, all the time.

We are all well aware of AI’s challenges: (1) data management in corporations, (2) a lack of deeply skilled AI talent, and (3) the need for responsible AI (moral use, bias, security). But I find there are so many articles online about AI industry disruption that stoke too much concern about malicious use of AI, or even job-loss paranoia.

I recently read “Human + Machine: Reimagining Work in the Age of AI” by Paul R. Daugherty and H. James Wilson, both Accenture leaders, and found their approach to human-machine collaboration, and its implications for human resources and new business models, refreshing and inspiring. Read More

#books, #strategy

Fuzzy Math Is Key to AI Chip That Promises Human-Like Intuition

Simon Knowles, chief technology officer of Graphcore Ltd., is smiling at a whiteboard as he maps out his vision for the future of machine learning. He uses a black marker to dot and diagram the nodes of the human brain: the parts that are “ruminative, that think deeply, that ponder.” His startup is trying to approximate these neurons and synapses in its next-generation computer processors, which the company is betting can “mechanize intelligence.”

Artificial intelligence is often thought of as complex software that mines vast datasets, but Knowles and his co-founder, Chief Executive Officer Nigel Toon, argue that more important obstacles still exist in the computers that run the software. The problem, they say, sitting in their airy offices in the British port city of Bristol, is that chips—known, depending on their function, as CPUs (central processing units) or GPUs (graphics processing units)—weren’t designed to “ponder” in any recognizably human way. Whereas human brains use intuition to simplify problems such as identifying an approaching friend, a computer might try to analyze every pixel of that person’s face, comparing it to a database of billions of images before attempting to say hello. That precision, which made sense when computers were primarily calculators, is massively inefficient for AI, burning huge quantities of energy to process all the relevant data. Read More

#human, #nvidia

What Makes a Strong Team? Using Collective Intelligence to Predict Team Performance in League of Legends

Recent research has demonstrated that (a) groups can be characterized by a collective intelligence (CI) factor that measures their ability to perform together on a wide range of different tasks, and (b) this factor can predict groups’ performance on other tasks in the future. The current study examines whether these results translate into the world of teams in competitive online video games where self-organized, time-pressured, and intense collaboration occurs purely online. In this study of teams playing the online game League of Legends, we find that CI does, indeed, predict the competitive performance of teams controlling for the amount of time played as a team. We also find that CI is positively correlated with the presence of a female team member and with the team members’ average social perceptiveness. Finally, unlike in prior studies, tacit coordination in this setting plays a larger role than verbal communication. Read More
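
As a sketch of what “predicts performance controlling for time played” means in practice, a regression on synthetic data (invented here, not the study’s) includes both predictors and reads off the CI coefficient:

```python
# Hedged sketch on synthetic data: "CI predicts performance controlling
# for time played" means regressing performance on both predictors and
# inspecting CI's coefficient with hours played held fixed.
import numpy as np

rng = np.random.default_rng(1)
n = 300
ci = rng.normal(size=n)                       # collective intelligence
hours = rng.normal(size=n) + 0.5 * ci         # time played, correlated with CI
perf = 0.6 * ci + 0.3 * hours + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), ci, hours])  # intercept, CI, hours
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
print("CI effect, holding hours fixed:", beta[1])   # recovers ~0.6
```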

#collective-intelligence

Neuromorphic Chips and the Future of Your Cell Phone

This article is particularly fun for me since it brings together two developments that I didn’t see coming together: real-time computer vision (RTCV) and neuromorphic neural nets (aka spiking neural nets).

We’ve been following neuromorphic nets for a few years now (additional references at the end of this article) and viewed them as the next (third) generation of neural nets. This was mostly in the context of the pursuit of Artificial General Intelligence (AGI), which is the holy grail (or terrifying Terminator) of all we’ve been doing.

Where we got off track was in thinking that neuromorphic nets, still in their infancy, were only for AGI. It turns out that they facilitate a lot of closer-in capabilities, and among them could be real-time computer vision (RTCV). Why that’s true has more to do with how neuromorphics are structured than with what fancy things they may be able to do. Here’s the story. Read More
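
For readers new to spiking nets, here is a minimal sketch of the structural difference alluded to above, with illustrative constants not tied to any particular chip: a leaky integrate-and-fire neuron integrates input over time and emits discrete spike events only when a threshold is crossed, instead of producing a dense activation for every input.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of
# spiking/neuromorphic nets: it integrates input current over time and
# fires a discrete spike only when its membrane potential crosses a
# threshold. Constants are illustrative only.
import numpy as np

leak, threshold, v_reset = 0.9, 1.0, 0.0
v = 0.0                                     # membrane potential
inputs = np.full(8, 0.5)                    # constant input current

spikes = []
for t, i_t in enumerate(inputs):
    v = leak * v + i_t                      # decay, then integrate input
    if v >= threshold:                      # event-driven output:
        spikes.append(t)                    # fire a spike...
        v = v_reset                         # ...and reset
print("spike times:", spikes)               # sparse events, e.g. [2, 5]
```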

#vision