Putting neural networks under the microscope

Researchers from MIT and the Qatar Computing Research Institute (QCRI) are putting the machine-learning systems known as neural networks under the microscope.

In a study that sheds light on how these systems manage to translate text from one language to another, the researchers developed a method that pinpoints individual nodes, or “neurons,” in the networks that capture specific linguistic features.

Neural networks learn to perform computational tasks by processing huge sets of training data. In machine translation, a network crunches language data annotated by humans, and presumably “learns” linguistic features, such as word morphology, sentence structure, and word meaning. Given new text, these networks match these learned features from one language to another, and produce a translation. Read More
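
The study's method is not spelled out in this excerpt, but the core idea of neuron-level probing can be illustrated with a minimal sketch: train a linear probe to predict a linguistic property (say, part of speech) from a layer's activations, then rank individual neurons by the weight the probe assigns them. The data below is a random stand-in for real model activations, and all names are illustrative, not the researchers' actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in data: activations from one layer of a translation model,
# shape (num_tokens, num_neurons), plus a binary linguistic label per token.
rng = np.random.default_rng(0)
activations = rng.normal(size=(5000, 512))  # hypothetical; real runs use model activations
is_verb = rng.integers(0, 2, size=5000)     # hypothetical part-of-speech label

# Train a sparse linear probe: L1 regularization pushes the weights of
# uninformative neurons toward zero, isolating the relevant ones.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(activations, is_verb)

# Neurons with the largest absolute weights are the candidate "verb neurons."
ranking = np.argsort(-np.abs(probe.coef_[0]))
print("Top 10 candidate neurons:", ranking[:10])
```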

#explainability, #neural-networks

From deepfake to "cheap fake," it's getting harder than ever to tell what's true on your favorite apps and websites

In early 2018 a video that appeared to feature former President Obama discussing the dangers of fake news went viral. The clip, created by comedian Jordan Peele, foreshadowed challenges that have now become all too real. These days, tech firms, media companies and consumers are all routinely forced to make determinations about whether content is authentic or fake — and it’s increasingly hard to tell the difference.

Deepfakes are videos and images that have been digitally manipulated to depict people saying and doing things that never happened. Most deepfakes use artificial intelligence to alter video and to generate authentic-sounding audio. These clips are often produced to fool viewers, and are optimized to spread rapidly on social media.

Examples of deepfake content are popping up more frequently. In May, AI startup Dessa created a video that mimicked the voice of YouTube star Joe Rogan. A few weeks later, a video that purported to show Nancy Pelosi slurring her speech went viral on social media. And this week, a video featuring Facebook CEO Mark Zuckerberg speaking like a James Bond villain racked up millions of views. Read More

#fake

ML4ALL: Machine Learning Conference for Software Engineers

Read More

#videos

The Dawn of Robot Surveillance

Imagine a surveillance camera in a typical convenience store in the 1980s. That camera was big and expensive, and connected by a wire running through the wall to a VCR sitting in a back room. There have been significant advances in camera technology in the ensuing decades — in resolution, digitization, storage, and wireless transmission — and cameras have become cheaper and far more prevalent. Still, for all those advances, the social implications of being recorded have not changed: when we walk into a store, we generally expect that the presence of cameras won’t affect us. We expect that our movements will be recorded, and we might feel self-conscious if we notice a camera, especially if we’re doing anything that we feel might attract attention. But unless something dramatic occurs, we generally understand that the videos in which we appear are unlikely to be scrutinized or monitored.

All that is about to change. Read More

#surveillance

A spy reportedly used an AI-generated profile picture to connect with sources on LinkedIn

Over the past few years, the rise of AI fakes has got a lot of people very worried, with experts warning that this technology could be used to spread lies and misinformation online. But actual evidence of this happening has so far been thin on the ground, which is why a new report from the Associated Press makes for such interesting reading.

The AP says it found evidence of what seems to be a would-be spy using an AI-generated profile picture to fool contacts on LinkedIn.

The publication says that the fake profile, given the name Katie Jones, connected with a number of policy experts in Washington. These included a scattering of government figures such as a senator’s aide, a deputy assistant secretary of state, and Paul Winfree, an economist currently being considered for a seat on the Federal Reserve. Read More

#fake

Experts: Spy used AI-generated face to connect with targets

Katie Jones sure seemed plugged into Washington’s political scene. The 30-something redhead boasted a job at a top think tank and a who’s-who network of pundits and experts, from the centrist Brookings Institution to the right-wing Heritage Foundation. She was connected to a deputy assistant secretary of state, a senior aide to a senator and the economist Paul Winfree, who is being considered for a seat on the Federal Reserve.

But Katie Jones doesn’t exist, The Associated Press has determined. Instead, the persona was part of a vast army of phantom profiles lurking on the professional networking site LinkedIn. And several experts contacted by the AP said Jones’ profile picture appeared to have been created by a computer program. Read More

#fake

Top AI researchers race to detect 'deepfake' videos: 'We are outgunned'

Top artificial-intelligence researchers across the country are racing to defuse an extraordinary political weapon: computer-generated fake videos that could undermine candidates and mislead voters during the 2020 presidential campaign.

And they have a message: We’re not ready. Read More

#fake

Artificial intelligence reinforces power and privilege

What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They’re all being sifted by some form of artificial intelligence.

Advanced nations and the world’s biggest companies have thrown billions of dollars behind AI – a set of computing practices, including machine learning, that collate masses of our data, analyse it, and use it to predict what we would do.

Yet cycles of hype and despair are inseparable from the history of AI. Is that clunky robot really about to take my job? How do the non-geeks among us distinguish AI’s promise from the hot air and decide where to focus concern? Read More

#surveillance

MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense

Recent works on gradient-based attacks and universal perturbations can adversarially modify images to bring down the accuracy of state-of-the-art classification techniques based on deep neural networks to as low as 10% on popular datasets like MNIST and ImageNet. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we derive inspiration from recent advances in the fields of cybersecurity and multi-agent systems and propose to use the concept of Moving Target Defense (MTD) for increasing the robustness of a set of deep networks against such adversarial attacks. To this end, we formalize and exploit the notion of differential immunity of an ensemble of networks to specific attacks. To classify an input image, a trained network is picked from this set of networks by formulating the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) Users as a repeated Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for the MNIST and ImageNet datasets while maintaining high classification accuracy on legitimate test images. Lastly, we demonstrate that our framework can be used in conjunction with any existing defense mechanism to provide more resilience to adversarial attacks than those defense mechanisms by themselves. Read More
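
As a rough illustration of the moving-target idea (not the paper's full Bayesian Stackelberg formulation), a defender can keep an ensemble of differently trained classifiers and sample one per input from a mixed strategy, so an attacker cannot tailor a perturbation to any single fixed network. The toy "networks" and probabilities below are placeholders; in MTDeep the strategy comes from solving the repeated game.

```python
import numpy as np

# Hypothetical ensemble: each "network" maps an input to a label. In practice
# these are differently trained deep nets chosen for differential immunity,
# i.e., low overlap in the attacks they are vulnerable to.
def net_a(x): return int(x.sum() > 0)
def net_b(x): return int(x.mean() > 0)
def net_c(x): return int(x.max() > 1)

ensemble = [net_a, net_b, net_c]

# Defender's mixed strategy over the ensemble. Illustrative numbers only;
# MTDeep derives these probabilities from a Bayesian Stackelberg Game.
strategy = np.array([0.5, 0.3, 0.2])

def classify(x, rng):
    """Sample a network per input, so the attacker faces a moving target."""
    net = ensemble[rng.choice(len(ensemble), p=strategy)]
    return net(x)

rng = np.random.default_rng(0)
print(classify(rng.normal(size=16), rng))
```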

#assurance

How AI is catching people who cheat on their diets, job searches and school work

Artificial intelligence is putting new teeth on the old saw that cheaters never prosper.

New companies and new research are applying cutting-edge technology in at least three different ways to combat cheating — on homework, on the job hunt and even on one’s diet. Read More

#surveillance