Twelve Million Phones, One Dataset, Zero Privacy

Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. Read More

#cyber, #privacy, #surveillance, #wifi

Life after artificial intelligence

AI stands to be the most radically transformative technology ever developed by humankind. What hypothetical situations are looming right around the corner as AI technology rises?

What will we invent after we invent everything that can be invented?

Artificial intelligence stands to be the most radically transformative technology ever developed by the human race. As a former artificial intelligence entrepreneur turned investor, I spend a lot of time thinking about the future of this technology: where it’s taking us and how our lives are going to re-form around it. We humans tend to develop emergent technologies to the nth degree, so I think there is a certain inevitability to the far-out techno-utopian visions from certain branches of science fiction — it just makes common sense to me and many others. Why shouldn’t AI change everything? Read More

#artificial-intelligence, #strategy

XAI—Explainable artificial intelligence

Explainability is essential for users to effectively understand, trust, and manage powerful artificial intelligence applications.

Recent successes in machine learning (ML) have led to a new wave of artificial intelligence (AI) applications that offer extensive benefits to a diverse range of fields. However, many of these systems are not able to explain their autonomous decisions and actions to human users. Explanations may not be essential for certain AI applications, and some AI researchers argue that the emphasis on explanation is misplaced, too difficult to achieve, and perhaps unnecessary. However, for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners [see recent reviews (1–3)].

Recent AI successes are largely attributed to new ML techniques that construct models in their internal representations. These include support vector machines (SVMs), random forests, probabilistic graphical models, reinforcement learning (RL), and deep learning (DL) neural networks. Although these models exhibit high performance, they are opaque in terms of explainability. There may be inherent conflict between ML performance (e.g., predictive accuracy) and explainability. Often, the highest performing methods (e.g., DL) are the least explainable, and the most explainable (e.g., decision trees) are the least accurate. Figure 1 illustrates this with a notional graph of the performance-explainability tradeoff for some of the ML techniques. Read More

#explainability

The Pentagon’s AI Chief Prepares for Battle

Nearly every day, in war zones around the world, American military forces request fire support. By radioing coordinates to a howitzer miles away, infantrymen can deliver the awful ruin of a 155-mm artillery shell on opposing forces. If defense officials in Washington have their way, artificial intelligence is about to make that process a whole lot faster.

The effort to speed up fire support is one of a handful of initiatives that Lt. Gen. Jack Shanahan describes as the “lower consequence missions” that the Pentagon is using to demonstrate how it can integrate artificial intelligence into its weapons systems. As the head of the Joint Artificial Intelligence Center, a 140-person clearinghouse within the Department of Defense focused on speeding up AI adoption, Shanahan and his team are building applications in well-established AI domains—tools for predictive maintenance and health record analysis—but also venturing into the more exotic, pursuing AI capabilities that would make the technology a centerpiece of American warfighting. Read More

#dod

In Event of Moon Disaster – Nixon Deepfake Clips

Read More

This Nixon Deepfake Is an Alternate Reality Where Apollo 11 Fails

Deepfake technology makes the impossible, possible—well, at least visually possible. In this case, we’re talking about Richard Nixon and a speech of his that never actually occurred—a speech where he announces the death of all three Apollo 11 astronauts on the surface of the moon. Read More

#fake, #videos

Is AI About to Hit a Wall?

There have been several stories in recent months around the theme that AI is about to hit a wall: that the rapid improvements we’ve experienced and the benefits we’ve accrued can’t continue at the current pace. It’s worth taking a look at these arguments to see whether we should be adjusting our plans and expectations. Read More

#artificial-intelligence