People around the world are using intelligent machines to create new forms of art.
In late October 2018, a distinctly odd painting appeared at the fine art auction house Christie's. At a distance, the painting looks like a 19th-century portrait of an austere gentleman dressed in black. …Our painter is a machine — an intelligent machine. Though initial estimates had the portrait selling for under $10,000, it went on to fetch an incredible $432,500. The portrait was not created by an inspired human mind but was generated by artificial intelligence in the form of a Generative Adversarial Network, or GAN. Read More
Evaluating Explainable Artificial Intelligence Methods for Multi-label Deep Learning Classification Tasks in Remote Sensing
Although deep neural networks hold the state of the art in several remote sensing tasks, their black-box operation hinders the understanding of their decisions, concealing biases and other shortcomings in datasets and model performance. To this end, we have applied explainable artificial intelligence (XAI) methods to remote sensing multi-label classification tasks in order to produce human-interpretable explanations and improve transparency. In particular, we developed deep learning models with state-of-the-art performance on the benchmark BigEarthNet and SEN12MS datasets. Ten XAI methods were employed to understand and interpret the models' predictions, along with quantitative metrics to assess and compare their performance. Numerous experiments were performed to assess the overall performance of the XAI methods for straightforward prediction cases, cases with multiple competing labels, and misclassification cases. According to our findings, Occlusion, Grad-CAM and Lime were the most interpretable and reliable XAI methods. However, none of them delivers high-resolution outputs, and, unlike Grad-CAM, both Lime and Occlusion are computationally expensive. We also highlight different aspects of XAI performance and offer insights into black-box decisions in order to improve transparency, understand model behavior, and reveal dataset particularities. Read More
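Of the methods named above, Occlusion is the simplest to sketch: mask one image region at a time and measure how much the predicted probability of a given label drops. The snippet below is a minimal, illustrative PyTorch sketch for a multi-label classifier with sigmoid outputs; the model, image shape, patch size, and stride are assumptions made for illustration, not details taken from the paper's code.

```python
# Minimal occlusion-sensitivity sketch for a multi-label classifier.
# Assumes `model` maps a (1, C, H, W) tensor to per-label logits and that
# labels are scored independently with a sigmoid (multi-label setting).
import torch

def occlusion_map(model, image, label_idx, patch=16, stride=8, fill=0.0):
    """Slide a square patch over the image and record the drop in the
    predicted probability of one label; larger drops mean the occluded
    region mattered more for that label."""
    model.eval()
    _, H, W = image.shape  # image is a (C, H, W) tensor
    with torch.no_grad():
        base = torch.sigmoid(model(image.unsqueeze(0)))[0, label_idx].item()
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = fill  # mask this region
            with torch.no_grad():
                p = torch.sigmoid(model(occluded.unsqueeze(0)))[0, label_idx].item()
            heat[i, j] = base - p  # probability drop attributable to the region
    return heat
```

In this setting one would call occlusion_map once per label of interest and upsample the coarse heat map to the input resolution for visualization, which also illustrates the paper's two caveats: the output is low-resolution and the repeated forward passes make the method computationally expensive.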
The U.S. Government Needs to Overhaul Cybersecurity. Here’s How.
After the 2015 hack of the U.S. Office of Personnel Management, the SolarWinds breach, and—just weeks after SolarWinds—the latest Microsoft breach, it is by now clear that the U.S. federal government is woefully unprepared in matters of cybersecurity. Following the SolarWinds intrusion, White House leaders have called for a comprehensive cybersecurity overhaul to better protect U.S. critical infrastructure and data, and the Biden administration plans to release a new executive order to this end.
What should this reinvestment in cybersecurity look like? Although the United States is the home of many top cybersecurity companies, the U.S. government is behind where it should be both in technology modernization and in mindset. Best-in-class cyberdefense technologies have been available on the market for years, yet the U.S. government has failed to adopt them, opting instead to treat cybersecurity like a counterintelligence problem and focusing most of its resources on detection. Yet the government’s massive perimeter detection technology, Einstein, failed to detect the SolarWinds intrusion—which lays bare the inadequacy of this approach. Read More
The Limits of Political Debate
I.B.M. taught a machine to debate policy questions. What can it teach us about the limits of rhetorical persuasion?
We need A.I. to be more like a machine, supplying troves of usefully organized information. It can leave the bullshitting to us.
In February, 2011, an Israeli computer scientist named Noam Slonim proposed building a machine that would be better than people at something that seems inextricably human: arguing about politics. …In February, 2019, the machine had its first major public debate, hosted by Intelligence Squared, in San Francisco. The opponent was Harish Natarajan, a thirty-one-year-old British economic consultant, who, a few years earlier, had been the runner-up in the World Universities Debating Championship. The machine lost.
As Arthur Applbaum, a political philosopher who is the Adams Professor of Political Leadership and Democratic Values at Harvard’s Kennedy School, saw it, the particular adversarial format chosen for this debate had the effect of elevating technical questions and obscuring ethical ones. The audience had voted Natarajan the winner of the debate. But, Applbaum asked, what had his argument consisted of? “He rolled out standard objections: it’s not going to work in practice, and it will be wasteful, and there will be unintended consequences. If you go through Harish’s argument line by line, there’s almost no there there,” he said. Natarajan’s way of defeating the computer, at some level, had been to take a policy question and strip it of all its meaningful specifics. “It’s not his fault,” Applbaum said. There was no way that he could match the computer’s fact-finding. “So, instead, he bullshat.” Read More
