We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an adversary. Thus, a system may consist of neural networks named Alice and Bob, and we aim to limit what a third neural network named Eve learns from eavesdropping on the communication between Alice and Bob. We do not prescribe specific cryptographic algorithms to these neural networks; instead, we train end-to-end, adversarially. We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals. Read More
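The setup lends itself to a compact sketch. Below is an illustrative PyTorch version of the three-network adversarial game described above; the layer sizes, optimizers, and the exact penalty that pushes Eve toward chance-level error are assumptions for demonstration (the paper uses convolutional mix-and-transform networks), not the authors' implementation.

```python
# Illustrative sketch of adversarial neural cryptography: Alice encrypts,
# Bob decrypts with the shared key, Eve eavesdrops without it.
import torch
import torch.nn as nn

N = 16  # bits in plaintext and key, encoded as values in [-1, 1]

def net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = net(2 * N, N)   # (plaintext, key) -> ciphertext
bob   = net(2 * N, N)   # (ciphertext, key) -> plaintext estimate
eve   = net(N, N)       # ciphertext alone -> plaintext estimate

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e  = torch.optim.Adam(eve.parameters())

for step in range(1000):
    p = torch.randint(0, 2, (32, N)).float() * 2 - 1  # random plaintext bits
    k = torch.randint(0, 2, (32, N)).float() * 2 - 1  # shared secret key
    c = alice(torch.cat([p, k], dim=1))

    # Eve minimizes her own reconstruction error on the ciphertext alone.
    eve_err = (eve(c.detach()) - p).abs().mean()
    opt_e.zero_grad(); eve_err.backward(); opt_e.step()

    # Alice and Bob minimize Bob's error while pushing Eve's error toward
    # chance level (~1.0 for +/-1 bits); the penalty form is an assumption.
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    ab_loss = bob_err + (1 - (eve(c) - p).abs().mean()) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```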
#adversarial, #homomorphic-encryption

Monthly Archives: February 2020
Python Crash Course for Machine Learning and Data Science
Why your brain is not a computer
For decades it has been the dominant metaphor in neuroscience. But could this idea have been leading us astray all along?
We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity. Read More
Enigma: Decentralized Computation Platform with Guaranteed Privacy
A peer-to-peer network, enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma’s computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hash table for holding secret-shared data. An external blockchain is utilized as the controller of the network; it manages access control and identities, and serves as a tamper-proof log of events. Security deposits and fees incentivize operation, correctness and fairness of the system. Similar to Bitcoin, Enigma removes the need for a trusted third party, enabling autonomous control of personal data. For the first time, users are able to share their data with cryptographic guarantees regarding their privacy. Read More
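To see why secret-shared data reveals nothing on its own, here is a toy Python sketch of additive secret sharing, the primitive underlying secure multi-party computation schemes like the one Enigma builds on. This is not Enigma's protocol; the field modulus and party count are assumptions for illustration.

```python
# Toy additive secret sharing: a secret is split into random shares that
# sum to it modulo a prime; any subset of fewer than all shares is
# statistically independent of the secret.
import random

P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(secret, n_parties=3):
    """Split a secret into n random shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each party holds one share; no single share reveals anything about x or y.
x_shares, y_shares = share(42), share(100)
# Parties add their shares locally to obtain shares of x + y, computing on
# the data without ever seeing the other inputs in the clear.
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
assert reconstruct(sum_shares) == 142
```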
Engineers Just Built an Impressively Stable Quantum Silicon Chip From Artificial Atoms
Newly created artificial atoms on a silicon chip could become the new basis for quantum computing.
Engineers in Australia have found a way to make these artificial atoms more stable, which in turn could produce more consistent quantum bits, or qubits – the basic units of information in a quantum system. Read More
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of the BERT-base model from 78.3 to 79.4, and of the RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge. Experiments on the CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of the RoBERTa-large model on other tasks as well. Read More
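To make the idea concrete, here is a minimal PyTorch sketch of adversarial training on word embeddings in the spirit of FreeLB; the toy classifier, step counts, and step sizes are illustrative assumptions, not the authors' reference implementation.

```python
# FreeLB-style training step: run several PGD ascent steps on a
# perturbation of the input embeddings, accumulating parameter gradients
# "for free" across steps, then take one descent step on the model.
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """Stand-in for a Transformer encoder that consumes embeddings."""
    def __init__(self, dim=16, n_classes=2):
        super().__init__()
        self.head = nn.Linear(dim, n_classes)
    def forward(self, embeds):                 # embeds: (batch, seq, dim)
        return self.head(embeds.mean(dim=1))   # pool, then classify

def freelb_step(model, embeds, labels, optimizer,
                adv_steps=3, adv_lr=0.1, eps=0.5):
    loss_fn = nn.CrossEntropyLoss()
    # Random initialization of the perturbation inside the eps-ball.
    delta = torch.empty_like(embeds).uniform_(-eps, eps)
    delta.requires_grad_(True)
    optimizer.zero_grad()
    for _ in range(adv_steps):
        loss = loss_fn(model(embeds + delta), labels) / adv_steps
        # backward() accumulates gradients on BOTH the model parameters
        # (the "free" part) and on delta (for the inner ascent).
        loss.backward()
        with torch.no_grad():
            # Gradient ascent on delta, then projection onto the eps-ball.
            delta += adv_lr * delta.grad / (delta.grad.norm() + 1e-12)
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    optimizer.step()  # descend on the averaged adversarial risk

# Usage on dummy data, pretending embeds came from an embedding layer:
model = ToyClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
freelb_step(model, torch.randn(4, 10, 16), torch.randint(0, 2, (4,)), opt)
```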
#adversarial, #nlp

Reliable Fidelity and Diversity Metrics for Generative Models
Devising indicative evaluation metrics for the image generation task remains an open problem. The most widely used metric for measuring the similarity between real and generated images has been the Frechet Inception Distance (FID) score. Because it does not differentiate the fidelity and diversity aspects of the generated images, recent papers have introduced variants of precision and recall metrics to diagnose those properties separately. In this paper, we show that even the latest versions of the precision and recall metrics are not reliable yet. For example, they fail to detect the match between two identical distributions, they are not robust against outliers, and the evaluation hyperparameters are selected arbitrarily. We propose density and coverage metrics that solve the above issues. We analytically and experimentally show that density and coverage provide more interpretable and reliable signals for practitioners than the existing metrics. Read More
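For intuition, here is a rough NumPy sketch of density and coverage as nearest-neighbour-ball statistics; the value of k, the Euclidean distance, and the raw random inputs are assumptions (in practice the features would come from a pretrained embedding network), so treat this as a reading aid rather than the paper's code.

```python
# Density: how often fake samples land inside the k-NN balls of real
# samples (fidelity). Coverage: the fraction of real balls hit by at
# least one fake sample (diversity).
import numpy as np

def knn_radii(real, k=5):
    """Distance from each real sample to its k-th nearest real neighbour."""
    d = np.linalg.norm(real[:, None, :] - real[None, :, :], axis=-1)
    # Sorting each row puts the zero self-distance at index 0,
    # so index k is the k-th nearest neighbour excluding self.
    return np.sort(d, axis=1)[:, k]

def density_coverage(real, fake, k=5):
    radii = knn_radii(real, k)                                        # (N,)
    d = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=-1)  # (M, N)
    inside = d < radii[None, :]          # fake j inside the ball of real i
    density = inside.sum() / (k * len(fake))
    coverage = inside.any(axis=0).mean()
    return density, coverage

real = np.random.randn(200, 8)
fake = np.random.randn(100, 8)
print(density_coverage(real, fake))
```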
#gans

Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It
The methods underpinning the state-of-the-art artificial intelligence systems are systematically vulnerable to a new type of cybersecurity attack called an “artificial intelligence attack.” Using this attack, adversaries can manipulate these systems in order to alter their behavior to serve a malicious end goal. As artificial intelligence systems are further integrated into critical components of society, these artificial intelligence attacks represent an emerging and systematic vulnerability with the potential to have significant effects on the security of the country.
Unlike traditional cyberattacks that are caused by “bugs” or human mistakes in code, AI attacks are enabled by inherent limitations in the underlying AI algorithms that currently cannot be fixed. Further, AI attacks fundamentally expand the set of entities that can be used to execute cyberattacks. For the first time, physical objects can now be used for cyberattacks (e.g., an AI attack can transform a stop sign into a green light in the eyes of a self-driving car by simply placing a few pieces of tape on the stop sign itself). Data can also be weaponized in new ways using these attacks, requiring changes in the way data is collected, stored, and used. Read More
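For readers who want to see how small such manipulations can be, here is a minimal PyTorch sketch of the fast gradient sign method (FGSM), one of the simplest attacks in this family; the toy model, input, and epsilon are illustrative assumptions, not the report's example.

```python
# FGSM: nudge each input value a tiny amount in the direction that most
# increases the model's loss, producing a near-identical adversarial input.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(1, 1, 28, 28, requires_grad=True)             # stand-in image
y = torch.tensor([3])                                        # true label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

eps = 0.1  # perturbation budget: small enough to be nearly invisible
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
# x_adv looks almost identical to x but can flip the model's prediction.
```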
Defeated Chess Champ Garry Kasparov Has Made Peace With AI
Garry Kasparov is perhaps the greatest chess player in history. For almost two decades after becoming world champion in 1985, he dominated the game with a ferocious style of play and an equally ferocious swagger.
Outside the chess world, however, Kasparov is best known for losing to a machine. In 1997, at the height of his powers, Kasparov was crushed and cowed by an IBM supercomputer called Deep Blue. The loss sent shock waves across the world, and seemed to herald a new era of machine mastery over man. Read More
The End of Agile? Not a Chance.
There’s been a fair amount of opining lately about the end of Agile, the 19-year-old movement that began in software development and has made its way through the workforce as an alternative to more traditional ways of working. People seem to be worried that a strategy that once was considered lean, mean, and productive has now become cultish, bloated, and ineffectual. But Agile continues to work, and it continues to work well — when implemented in a disciplined way. Read More