A spy reportedly used an AI-generated profile picture to connect with sources on LinkedIn

Over the past few years, the rise of AI fakes has gotten a lot of people very worried, with experts warning that the technology could be used to spread lies and misinformation online. But actual evidence of this happening has so far been thin on the ground, which is why a new report from the Associated Press makes for such interesting reading.

The AP says it found evidence of what seems to be a would-be spy using an AI-generated profile picture to fool contacts on LinkedIn.

The publication says that the fake profile, given the name Katie Jones, connected with a number of policy experts in Washington. These included a scattering of government figures such as a senator’s aide, a deputy assistant secretary of state, and Paul Winfree, an economist currently being considered for a seat on the Federal Reserve. Read More

#fake

Experts: Spy used AI-generated face to connect with targets

Katie Jones sure seemed plugged into Washington’s political scene. The 30-something redhead boasted a job at a top think tank and a who’s-who network of pundits and experts, from the centrist Brookings Institution to the right-wing Heritage Foundation. She was connected to a deputy assistant secretary of state, a senior aide to a senator and the economist Paul Winfree, who is being considered for a seat on the Federal Reserve.

But Katie Jones doesn’t exist, The Associated Press has determined. Instead, the persona was part of a vast army of phantom profiles lurking on the professional networking site LinkedIn. And several experts contacted by the AP said Jones’ profile picture appeared to have been created by a computer program. Read More

#fake

Top AI researchers race to detect 'deepfake' videos: 'We are outgunned'

Top artificial-intelligence researchers across the country are racing to defuse an extraordinary political weapon: computer-generated fake videos that could undermine candidates and mislead voters during the 2020 presidential campaign.

And they have a message: We’re not ready. Read More

#fake

Artificial intelligence reinforces power and privilege

What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They’re all being sifted by some form of artificial intelligence.

Advanced nations and the world’s biggest companies have thrown billions of dollars behind AI – a set of computing practices, including machine learning, that collate masses of our data, analyse it, and use it to predict what we will do.

Yet cycles of hype and despair are inseparable from the history of AI. Is that clunky robot really about to take my job? How do the non-geeks among us distinguish AI’s promise from the hot air and decide where to focus concern? Read More

#surveillance

MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense

Recent work on gradient-based attacks and universal perturbations can adversarially modify images to bring the accuracy of state-of-the-art classification techniques based on deep neural networks down to as low as 10% on popular datasets like MNIST and ImageNet. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we draw inspiration from recent advances in the fields of cybersecurity and multi-agent systems and propose to use the concept of Moving Target Defense (MTD) to increase the robustness of a set of deep networks against such adversarial attacks. To this end, we formalize and exploit the notion of differential immunity of an ensemble of networks to specific attacks. To classify an input image, a trained network is picked from this set of networks by formulating the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) Users as a repeated Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for the MNIST and ImageNet datasets while maintaining high classification accuracy on legitimate test images. Lastly, we demonstrate that our framework can be used in conjunction with any existing defense mechanism to provide more resilience to adversarial attacks than those defense mechanisms by themselves. Read More
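The core MTD idea in the abstract can be illustrated with a small sketch. The network names, attack labels, and accuracy numbers below are made up for illustration, and the real MTDeep solves a Bayesian Stackelberg Game for the optimal mixed strategy; here a simple uniform mix stands in for it. The point survives the simplification: randomizing which network answers each query raises the defender's worst-case accuracy compared with committing to any single network.

```python
import random

# Hypothetical per-attack accuracies for three networks; names and numbers
# are illustrative, not from the paper. "Differential immunity" means each
# network is weak against a different attack.
ACCURACY = {
    "net_a": {"clean": 0.99, "fgsm": 0.10, "universal": 0.80},
    "net_b": {"clean": 0.98, "fgsm": 0.75, "universal": 0.15},
    "net_c": {"clean": 0.97, "fgsm": 0.60, "universal": 0.65},
}
ATTACKS = ["fgsm", "universal"]  # the malicious user's options

def worst_case_accuracy(strategy):
    """Defender's expected accuracy against the attacker's best response,
    i.e. the attack minimizing the mixed-strategy expected accuracy."""
    return min(
        sum(p * ACCURACY[net][atk] for net, p in strategy.items())
        for atk in ATTACKS
    )

def pick_network(strategy, rng=random):
    """Sample one network per query according to the mixed strategy (MTD)."""
    nets, probs = zip(*strategy.items())
    return rng.choices(nets, weights=probs, k=1)[0]

# Committing to the single most accurate network vs. mixing uniformly.
single = {"net_a": 1.0}
uniform = {net: 1.0 / len(ACCURACY) for net in ACCURACY}

print(worst_case_accuracy(single))             # 0.1: attacker targets net_a's weakness
print(round(worst_case_accuracy(uniform), 3))  # 0.483: mixing hedges across weaknesses
```

Because no single network is weak to every attack, the attacker's best response to the mixed strategy still leaves the defender far better off than the 10% worst case of any fixed network, which is the differential-immunity effect the paper formalizes.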

#assurance

How AI is catching people who cheat on their diets, job searches and school work

Artificial intelligence is putting new teeth on the old saw that cheaters never prosper.

New companies and new research are applying the cutting-edge technology in at least three different ways to combat cheating — on homework, on the job hunt and even on one’s diet. Read More

#surveillance

The Threat of Google’s DeepMind

If you consider that Google is the global leader in artificial intelligence, DeepMind is its crown jewel.

When Google folded the DeepMind Health unit, its healthcare subsidiary, into the main company, breaking a pledge that ‘data will not be connected to Google accounts’, you knew Google was cutting corners.

Google’s AI Supremacy is an Existential Threat

Bigger than the Department of Justice going after Google for antitrust is the harm DeepMind could do to the future of artificial intelligence. They are arguably the leader in deep learning. The choices they make will decide many things about the fate of humanity in an AI-centric world.

The next real interface after smartphones is the neural interface, and a Google-powered neural interface (beyond earbuds and voice AI) will power the next era of augmented humans. Read More

#artificial-intelligence, #singularity

Top 45 Artificial Intelligence Companies

Artificial intelligence has exploded in the past few years, with dozens of AI startups and major AI initiatives by big-name firms alike. The New York Times estimates there are 45 AI firms working on chips alone, not to mention the dozens of AI software companies working on machine learning, deep learning and AI projects.

AI is driving significant investment from venture capitalist firms, giant firms like Microsoft and Google, academic research, and job openings across a multitude of sectors. All of this is documented in the AI Index, produced by Stanford University’s Human-Centered AI Institute. Read More

#investing

What is DataOps and Why It’s Critical to the Data Monetization Value Chain

In my previous blog, “How DevOps Drives Analytics Operationalization and Monetization”, I discussed the critical and complementary role of DevOps in operationalizing and monetizing the analytics that come out of the Data Science development process. While the combination of Design Thinking and Data Science accelerates the creation of more effective, more predictive analytic modules (packaged, reusable and extensible analytic assets), it’s the combination of Data Science and DevOps that drives analytic model operationalization and monetization. Read More

#data-lake, #data-science, #devops

Tech companies are enabling a “machine of deportation” say leading immigrant rights advocates

In the past year, many major tech companies such as Amazon, Palantir, Salesforce, and Microsoft have come under scrutiny for selling software to US federal immigration agencies. That’s because those agencies have been responsible for enforcing some of the controversial immigration policies that separate families at the border, detain children, and deport people seeking refuge back to dangerous places.

Jonathan Ryan, CEO of immigrant legal aid and services organization RAICES, and Erika Andiola, the organization’s chief advocacy officer, are making the case that tech companies need to realize the moral consequences of the industry’s complicity — and their ability to stop what many view as blatantly unethical treatment of refugees. Read More

#ethics