The great-power nations that master the use of artificial intelligence are likely to gain tremendous military and economic benefits from the technology.
The United States benefitted greatly from a relatively fast adoption of the internet, and many of its most powerful companies today are the global giants of the internet age.
When it comes to its technological and economic future, the US generally believes:
— The USA’s prosperity and relative technological and economic prominence are guaranteed no matter what
— The most powerful nations in the world will be democracies, where free speech and elected officials will (albeit not perfectly) enact the will of the people
— Bumbling forward with the same model of academic and private-sector innovation will still keep the USA ahead of competitors in technological development
I believe these to be fatal assumptions. Read More
Monthly Archives: June 2019
Automated Speech Generation from UN General Assembly Statements: Mapping Risks in AI Generated Texts
Automated text generation has been applied broadly in many domains such as marketing and robotics, and used to create chatbots, product reviews and write poetry. The ability to synthesize text, however, presents many potential risks, while access to the technology required to build generative models is becoming increasingly easy. This work is aligned with the efforts of the United Nations and other civil society organisations to highlight potential political and societal risks arising through the malicious use of text generation software, and their potential impact on human rights. As a case study, we present the findings of an experiment to generate remarks in the style of political leaders by fine-tuning a pretrained AWD-LSTM model on a dataset of speeches made at the UN General Assembly. This work highlights the ease with which this can be accomplished, as well as the threats of combining these techniques with other technologies. Read More
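The paper fine-tunes a pretrained AWD-LSTM, which is beyond a short sketch, but the core idea (learn a statistical model of a speaker's text, then sample from it) can be illustrated with a deliberately crude stand-in: a bigram Markov chain. The function names and the miniature corpus below are invented for illustration and are not from the paper.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Count word-to-next-word transitions in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed_word, length, rng):
    """Sample a sequence by repeatedly picking a random successor."""
    out = [seed_word]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the assembly calls upon all member states to uphold peace "
          "and the assembly urges all member states to promote peace")
model = train_bigram_model(corpus)
print(generate(model, "the", 6, random.Random(0)))
```

A real style-mimicry system would replace the bigram counts with a neural language model such as the AWD-LSTM used in the paper, but the sampling loop is conceptually the same.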
Decentralizing Privacy: Using Blockchain to Protect Personal Data
The recent increase in reported incidents of surveillance and security breaches compromising users’ privacy calls into question the current model, in which third parties collect and control massive amounts of personal data. Bitcoin has demonstrated in the financial space that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we describe a decentralized personal data management system that ensures users own and control their data. We implement a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party. Unlike Bitcoin, transactions in our system are not strictly financial – they are used to carry instructions, such as storing, querying and sharing data. Finally, we discuss possible future extensions to blockchains that could harness them into a well-rounded solution for trusted computing problems in society. Read More
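As a rough sketch of the idea, the toy ledger below records access-grant transactions in hash-linked blocks and answers permission queries from chain state alone, with no trusted third party. The class and method names are invented; the real system's off-chain storage, signatures, and compound identities are omitted.

```python
import hashlib
import json

class AccessControlChain:
    """Toy hash-linked ledger whose transactions carry access-control
    instructions instead of payments (a simplification of the paper's
    design, with invented names)."""

    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "tx": None}]
        self.permissions = {}  # (owner, service) -> set of allowed operations

    def _hash(self, block):
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()

    def grant(self, owner, service, ops):
        """Append a transaction granting `service` the listed operations
        over `owner`'s data, linked to the previous block by its hash."""
        block = {
            "index": len(self.chain),
            "prev_hash": self._hash(self.chain[-1]),
            "tx": {"owner": owner, "service": service, "ops": ops},
        }
        self.chain.append(block)
        self.permissions[(owner, service)] = set(ops)

    def check(self, owner, service, op):
        """Answer an access query from ledger state alone: the chain
        itself acts as the access-control manager."""
        return op in self.permissions.get((owner, service), set())

ledger = AccessControlChain()
ledger.grant("alice", "fitness_app", ["read_steps"])
print(ledger.check("alice", "fitness_app", "read_steps"))     # True
print(ledger.check("alice", "fitness_app", "read_location"))  # False
```

Revoking or changing access would simply mean appending a newer transaction, so the full history of grants remains auditable on the chain.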
How A.I. Could Be Weaponized to Spread Disinformation
In 2017, an online disinformation campaign spread against the “White Helmets,” claiming that the group of aid volunteers was serving as an arm of Western governments to sow unrest in Syria.
This false information was convincing. But the Russian organization behind the campaign ultimately gave itself away because it repeated the same text across many different fake news sites.
Now, researchers at the world’s top artificial intelligence labs are honing technology that can mimic how humans write, which could potentially help disinformation campaigns go undetected by generating huge amounts of subtly different messages. Read More
Text-based Editing of Talking-head Video
Editing talking-head video to change the speech content or to remove filler words is challenging. We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified, while maintaining a seamless audio-visual flow (i.e. no jump cuts). Our method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression and scene illumination per frame. To edit a video, the user has to only edit the transcript, and an optimization strategy then chooses segments of the input corpus as base material. The annotated parameters corresponding to the selected segments are seamlessly stitched together and used to produce an intermediate video representation in which the lower half of the face is rendered with a parametric face model. Finally, a recurrent video generation network transforms this representation to a photorealistic video that matches the edited transcript. We demonstrate a large variety of edits, such as the addition, removal, and alteration of words, as well as convincing language translation and full sentence synthesis. Read More
Closing Keynote | AIDC 2018 | Andrew Ng, CEO
Confidentiality and Integrity with Untrusted Hosts
Several security-typed languages have recently been proposed to enforce security properties such as confidentiality or integrity by type checking. We propose a new security-typed language, SPL@, that addresses two important limitations of previous approaches. First, existing languages assume that the underlying execution platform is trusted; this assumption does not scale to distributed computation in which a variety of differently trusted hosts are available to execute programs. Our new approach, secure program partitioning, translates programs written assuming complete trust in a single executing host into programs that execute using a collection of variously trusted hosts to perform computation. As the trust configuration of a distributed system evolves, this translation can be performed as necessary for security. Second, many common program transformations do not work in existing security-typed languages; although they produce equivalent programs, these programs are rejected because of apparent information flows. SPL@ uses a novel mechanism based on ordered linear continuations to permit a richer class of program transformations, including secure program partitioning. This report is the technical companion to [ZM00]. It contains expanded discussion and extensive proofs of both the soundness and noninterference theorems mentioned in Section 3.3 of that work. Read More
Untrusted Hosts and Confidentiality: Secure Program Partitioning
This paper presents secure program partitioning, a language-based technique for protecting confidential data during computation in distributed systems containing mutually untrusted hosts. Confidentiality and integrity policies can be expressed by annotating programs with security types that constrain information flow; these programs can then be partitioned automatically to run securely on heterogeneously trusted hosts. The resulting communicating sub-programs collectively implement the original program, yet the system as a whole satisfies the security requirements of participating principals without requiring a universally trusted host machine. The experience in applying this methodology and the performance of the resulting distributed code suggest that this is a promising way to obtain secure distributed computation. Read More
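As a loose illustration of the partitioning idea (not the papers' actual type system), the sketch below labels each statement with a confidentiality level and assigns it only to a host trusted to observe data at that level. The names, the two-level lattice, and the example program are all invented for this sketch.

```python
# Toy secure program partitioning: statements carry confidentiality
# labels, hosts carry trust levels, and the splitter places a statement
# only on a host trusted to see data at that label.
TRUST = {"public": 0, "secret": 1}

def partition(statements, hosts):
    """statements: list of (code, label); hosts: dict host -> highest
    label it may observe. Returns host -> list of statements, or raises
    if some statement cannot be placed on any sufficiently trusted host."""
    placement = {h: [] for h in hosts}
    for code, label in statements:
        candidates = [h for h, lvl in hosts.items()
                      if TRUST[lvl] >= TRUST[label]]
        if not candidates:
            raise ValueError(f"no host trusted for {label} statement: {code}")
        # Prefer the least-trusted host that still satisfies the label,
        # mirroring the idea that data flows only to hosts its owners trust.
        host = min(candidates, key=lambda h: TRUST[hosts[h]])
        placement[host].append(code)
    return placement

program = [
    ("total = a + b", "public"),
    ("key = decrypt(blob)", "secret"),
]
hosts = {"cloud": "public", "trusted_server": "secret"}
print(partition(program, hosts))
# {'cloud': ['total = a + b'], 'trusted_server': ['key = decrypt(blob)']}
```

The real systems derive these labels from security-typed annotations and also split data and control flow, not just whole statements, but the placement constraint is the same.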
More efficient security for cloud-based machine learning
A novel encryption method devised by MIT researchers secures data used in online neural networks, without dramatically slowing their runtimes. This approach holds promise for using cloud-based neural networks for medical-image analysis and other applications that use sensitive data.
Outsourcing machine learning is a rising trend in industry. Major tech firms have launched cloud platforms that conduct computation-heavy tasks, such as, say, running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get back results in several hours.
But what if that private data leaks? In recent years, researchers have explored various secure-computation techniques to protect such sensitive data. But those methods have performance drawbacks that make neural network evaluation (testing and validating) sluggish — sometimes as much as a million times slower — limiting their wider adoption. Read More
GAZELLE: A Low Latency Framework for Secure Neural Network Inference
The growing popularity of cloud-based machine learning raises natural questions about the privacy guarantees that can be provided in such settings. Our work tackles this problem in the context of prediction-as-a-service wherein a server has a convolutional neural network (CNN) trained on its private data and wishes to provide classifications on clients’ private images. Our goal is to build efficient secure computation protocols which allow a client to obtain the classification result without revealing their input to the server, while at the same time preserving the privacy of the server’s neural network.
To this end, we design Gazelle, a scalable and low-latency system for secure neural network inference, using an intricate combination of homomorphic encryption and traditional two-party computation techniques (such as garbled circuits). Gazelle makes three contributions. First, we design a homomorphic encryption library which provides fast implementations of basic homomorphic operations such as SIMD (single instruction multiple data) addition, SIMD multiplication and ciphertext slot permutation. Second, we implement homomorphic linear algebra kernels which provide fast algorithms that map neural network layers to optimized homomorphic matrix-vector multiplication and convolution routines. Third, we design optimized encryption switching protocols which seamlessly convert between homomorphic and garbled circuit encodings to enable implementation of complete neural network inference. We evaluate our protocols on benchmark neural networks trained on the MNIST and CIFAR-10 datasets and show that Gazelle outperforms the best existing systems such as MiniONN (ACM CCS 2017) and Chameleon (Crypto ePrint 2017/1164) by 20–30× in online runtime. When compared with fully homomorphic approaches like CryptoNets (ICML 2016), we demonstrate three orders of magnitude faster online runtime. Read More
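Gazelle's packed lattice-based scheme is far beyond a short sketch, but the homomorphic half of the design (the server evaluates a linear layer on encrypted inputs it cannot read) can be illustrated with a textbook Paillier cryptosystem over toy primes. This is not Gazelle's scheme, the parameters are wildly insecure, and the garbled-circuit half for nonlinear layers is omitted entirely.

```python
import math
import random

# Tiny Paillier keypair with toy primes -- insecure, illustration only.
p, q = 47, 59
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)  # requires Python 3.8+

def encrypt(m, rng):
    """Client-side: randomized Paillier encryption of plaintext m."""
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Client-side: recover the plaintext with the private key."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def encrypted_dot(weights, enc_x):
    """Server-side: compute Enc(sum w_i * x_i) from ciphertexts alone.
    Homomorphic scalar multiplication is ciphertext exponentiation;
    homomorphic addition is ciphertext multiplication."""
    acc = 1
    for w, c in zip(weights, enc_x):
        acc = (acc * pow(c, w, n2)) % n2
    return acc

rng = random.Random(42)
x = [3, 1, 4]  # client's private input vector
w = [2, 5, 1]  # server's weights (plaintext on the server)
enc_x = [encrypt(v, rng) for v in x]
print(decrypt(encrypted_dot(w, enc_x)))  # 2*3 + 5*1 + 1*4 = 15
```

The server never sees x, and the client never sees intermediate server state, which is the basic contract behind the linear layers in systems like Gazelle; the actual system uses packed lattice ciphertexts so that one operation acts on many slots at once.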