How do new scientific disciplines get started? For Iyad Rahwan, a computational social scientist with self-described “maverick” tendencies, it happened on a sunny afternoon in Cambridge, Massachusetts, in October 2017. Rahwan and Manuel Cebrian, a colleague from the MIT Media Lab, were sitting in Harvard Yard discussing how best to describe their preferred brand of multidisciplinary research. The rapid rise of artificial intelligence technology had generated new questions about the relationship between people and machines, which they had set out to explore. Rahwan, for example, had been exploring the question of ethical behavior for a self-driving car — should it swerve to avoid an oncoming SUV, even if it means hitting a cyclist? — in his Moral Machine experiment.
“I was good friends with Iain Couzin, one of the world’s foremost animal behaviorists,” Rahwan said, “and I thought, ‘Why isn’t he studying online bots? Why is it only computer scientists who are studying AI algorithms?’
“All of a sudden,” he continued, “it clicked: We’re studying behavior in a new ecosystem.”
Two years later, Rahwan, who now directs the Center for Humans and Machines at the Max Planck Institute for Human Development, has gathered 22 colleagues — from disciplines as diverse as robotics, computer science, sociology, cognitive psychology, evolutionary biology, artificial intelligence, anthropology and economics — to publish a paper in Nature calling for the inauguration of a new field of science called “machine behavior.” Read More
China's hackers are ransacking databases for your health data
In May 2017, the WannaCry ransomware spread around the globe. As the worm locked Windows PCs, the UK’s National Health Service quickly ground to a halt. Some 19,000 appointments were cancelled, doctors couldn’t access patient files, and email accounts were taken offline.
But North Korean hackers behind WannaCry didn’t touch one thing: patient data. No personal information was stolen, the NHS has concluded. The cyberattack was purely to cause disruption and an attempt to earn the hermit state some much-needed cash.
The same can’t be said for China. New analysis indicates that state-sponsored hackers from the country are targeting medical data from the healthcare industry. Research from security firm FireEye has identified multiple groups with links to China attacking medical systems and databases around the world. These attacks include incidents in 2019 but date back as far as 2013. Read More
How YouTube Radicalized Brazil
When Matheus Dominguez was 16, YouTube recommended a video that changed his life.
He was in a band in Niterói, a beach-ringed city in Brazil, and practiced guitar by watching tutorials online.
YouTube had recently installed a powerful new artificial intelligence system that learned from user behavior and paired videos with recommendations for others. One day, it directed him to an amateur guitar teacher named Nando Moura, who had gained a wide following by posting videos about heavy metal, video games and, most of all, politics.
In colorful and paranoid far-right rants, Mr. Moura accused feminists, teachers and mainstream politicians of waging vast conspiracies. Mr. Dominguez was hooked.
As his time on the site grew, YouTube recommended videos from other far-right figures. One was a lawmaker named Jair Bolsonaro, then a marginal figure in national politics — but a star in YouTube’s far-right community in Brazil, where the platform has become more widely watched than all but one TV channel.
Last year, he became President Bolsonaro. Read More
When speed kills: Lethal autonomous weapon systems, deterrence and stability
While the applications of artificial intelligence (AI) for militaries are broad, lethal autonomous weapon systems (LAWS) represent one possible usage of narrow AI by militaries. Research and development on LAWS by major powers, middle powers and non-state actors makes exploring the consequences for the security environment a crucial task. This article draws on classic research in security studies and examples from military history to assess the potential development and deployment of LAWS, as well as how they could influence arms races, the stability of deterrence (including strategic stability), the risk of crisis instability and wartime escalation. It focuses on these questions through the lens of two characteristics of LAWS: the potential for increased operational speed and the potential for decreased human control over battlefield choices. It also examines how these issues interact with the large uncertainty currently associated with potential AI-based military capabilities, both in terms of the range of the possible and the opacity of their programming. Read More
How does the offense-defense balance scale?
We ask how the offense-defense balance scales, meaning how it changes as investments into a conflict increase. To do so, we offer a general formalization of the offense-defense balance in terms of contest success functions. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high. We refer to this phenomenon as offensive-then-defensive scaling, or OD-scaling. Such scaling effects may help us understand the security implications of applications of artificial intelligence that in essence scale up existing capabilities. Read More
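The paper’s own models are not reproduced in this excerpt, but the flavor of OD-scaling can be conveyed with a toy sketch. Assume (purely for illustration — the functional forms and parameters below are not the authors’ formalization) a system with `v` independent software vulnerabilities, where scaling both sides’ investments by a factor `k` lets the attacker discover each vulnerability with probability 1 − exp(−k/v) and the defender patch each with the same probability. An attack succeeds if at least one vulnerability is discovered but unpatched:

```python
from math import exp

def attack_success(k, v=10):
    """Toy OD-scaling sketch (illustrative assumptions, not the paper's model).

    v independent vulnerabilities; at investment scale k the attacker
    discovers each with probability 1 - exp(-k/v) and the defender
    independently patches each with the same probability. The attack
    succeeds if at least one vulnerability is discovered yet unpatched.
    """
    discovered = 1 - exp(-k / v)       # attacker finds a given vulnerability
    unpatched = exp(-k / v)            # defender failed to patch it
    per_vuln = discovered * unpatched  # that vulnerability is exploitable
    return 1 - (1 - per_vuln) ** v     # at least one exploitable vulnerability

# Attack success rises with scale at first (offense-favored), peaks,
# then falls as the defender patches nearly everything (defense-favored).
for k in (1, 7, 100):
    print(k, round(attack_success(k), 3))
```

In this sketch the success probability is single-peaked in the investment scale: growth favors offense below the crossover and defense above it, which is the qualitative OD-scaling pattern the abstract describes.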