Artificial Intelligence Creates Better Art Than You (Sometimes)

People around the world are using intelligent machines to create new forms of art.

In late October 2018, a distinctly odd painting appeared at the fine art auction house Christie’s. At a distance, the painting looks like a 19th-century portrait of an austere gentleman dressed in black. …Our painter is a machine — an intelligent machine. Though the initial estimates had the portrait selling for under $10,000, the painting went on to sell for an incredible $432,500. The portrait was not created by an inspired human mind but was generated by artificial intelligence in the form of a Generative Adversarial Network, or GAN. Read More
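For readers curious what sits behind the headline, below is a minimal sketch of the GAN idea in Keras: a generator maps random noise to images while a discriminator learns to tell generated images from real ones. The architecture and sizes are purely illustrative and are not the model behind the auctioned portrait.

```python
# Minimal GAN sketch: the two networks are trained adversarially, the
# discriminator on real-vs-fake batches, the generator on fooling the
# discriminator (training loop not shown). Sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 100  # dimensionality of the random noise fed to the generator

generator = tf.keras.Sequential([
    layers.Dense(7 * 7 * 128, activation="relu", input_shape=(latent_dim,)),
    layers.Reshape((7, 7, 128)),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),  # 28x28 "portrait"
])

discriminator = tf.keras.Sequential([
    layers.Conv2D(64, 4, strides=2, padding="same", activation="relu",
                  input_shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # probability the input is a real image
])
```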

#gans

Evaluating Explainable Artificial Intelligence Methods for Multi-label Deep Learning Classification Tasks in Remote Sensing

Although deep neural networks hold the state of the art in several remote sensing tasks, their black-box operation hinders understanding of their decisions, concealing biases and other shortcomings in datasets and model performance. To this end, we applied explainable artificial intelligence (XAI) methods to remote sensing multi-label classification tasks to produce human-interpretable explanations and improve transparency. In particular, we developed deep learning models with state-of-the-art performance on the benchmark BigEarthNet and SEN12MS datasets. Ten XAI methods were employed to understand and interpret the models’ predictions, along with quantitative metrics to assess and compare their performance. Numerous experiments assessed the overall performance of the XAI methods for straightforward prediction cases, cases with multiple competing labels, and misclassification cases. According to our findings, Occlusion, Grad-CAM, and LIME were the most interpretable and reliable XAI methods. However, none delivers high-resolution outputs, and, unlike Grad-CAM, both LIME and Occlusion are computationally expensive. We also highlight different aspects of XAI performance and offer insights into black-box decisions in order to improve transparency, understand model behavior, and reveal dataset particularities. Read More
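For a flavor of the simplest of these attribution methods, here is a minimal occlusion-sensitivity sketch in NumPy/Keras. The patch size, stride, and model interface are placeholders for illustration, not the configurations evaluated in the paper.

```python
# Minimal occlusion-sensitivity sketch: slide a blank patch over the image and
# record how much the predicted score for one label drops. Patch size, stride,
# and the model are illustrative placeholders, not the paper's setup.
import numpy as np

def occlusion_map(model, image, label_idx, patch=16, stride=8, fill=0.0):
    h, w, _ = image.shape
    base = model.predict(image[None], verbose=0)[0][label_idx]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill
            score = model.predict(occluded[None], verbose=0)[0][label_idx]
            heat[i, j] = base - score  # large drop => region matters for this label
    return heat
```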

#explainability

The U.S. Government Needs to Overhaul Cybersecurity. Here’s How.

After the 2015 hack of the U.S. Office of Personnel Management, the SolarWinds breach, and—just weeks after SolarWinds—the latest Microsoft breach, it is by now clear that the U.S. federal government is woefully unprepared in matters of cybersecurity. Following the SolarWinds intrusion, White House leaders have called for a comprehensive cybersecurity overhaul to better protect U.S. critical infrastructure and data, and the Biden administration plans to release a new executive order to this end.

What should this reinvestment in cybersecurity look like? Although the United States is the home of many top cybersecurity companies, the U.S. government is behind where it should be both in technology modernization and in mindset. Best-in-class cyberdefense technologies have been available on the market for years, yet the U.S. government has failed to adopt them, opting instead to treat cybersecurity like a counterintelligence problem and focusing most of its resources on detection. Yet the government’s massive perimeter detection technology, Einstein, failed to detect the SolarWinds intrusion—which lays bare the inadequacy of this approach.  Read More

#cyber

The Limits of Political Debate

I.B.M. taught a machine to debate policy questions. What can it teach us about the limits of rhetorical persuasion?

We need A.I. to be more like a machine, supplying troves of usefully organized information. It can leave the bullshitting to us.

In February, 2011, an Israeli computer scientist named Noam Slonim proposed building a machine that would be better than people at something that seems inextricably human: arguing about politics. …In February, 2019, the machine had its first major public debate, hosted by Intelligence Squared, in San Francisco. The opponent was Harish Natarajan, a thirty-one-year-old British economic consultant, who, a few years earlier, had been the runner-up in the World Universities Debating Championship. The machine lost.

As Arthur Applbaum, a political philosopher who is the Adams Professor of Political Leadership and Democratic Values at Harvard’s Kennedy School, saw it, the particular adversarial format chosen for this debate had the effect of elevating technical questions and obscuring ethical ones. The audience had voted Natarajan the winner of the debate. But, Applbaum asked, what had his argument consisted of? “He rolled out standard objections: it’s not going to work in practice, and it will be wasteful, and there will be unintended consequences. If you go through Harish’s argument line by line, there’s almost no there there,” he said. Natarajan’s way of defeating the computer, at some level, had been to take a policy question and strip it of all its meaningful specifics. “It’s not his fault,” Applbaum said. There was no way that he could match the computer’s fact-finding. “So, instead, he bullshat.” Read More

#big7, #human

Poem Generator Web Application With Keras, React, and Flask

An interesting area of NLP is text generation and, by extension, poem generation. This article describes a poem generator web app I built using Keras, Flask, and React.

Natural Language Processing (NLP) is an exciting branch of machine learning and artificial intelligence, applied in speech recognition, language translation, human-computer interaction, sentiment analysis, and more. One of its interesting areas is text generation and, of particular interest to me, poem generation.

In this article, I describe a poem generator web application, which I built using deep learning with Keras, Flask, and React. The core algorithm is from TensorFlow and is available in their notebook. The data it needs is an existing set of poems, stored in three text files. Read More
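To make the moving parts concrete, here is a rough sketch of the server side: a character-level LSTM generator in the spirit of the TensorFlow text-generation notebook, exposed through a single Flask endpoint a React front end could call. File names, layer sizes, and the route are placeholders, not the app's actual code.

```python
# Rough sketch of the two server-side pieces: a character-level LSTM text
# generator and a Flask endpoint. All names and sizes are placeholders.
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

text = open("poems.txt", encoding="utf-8").read()      # one of the poem files
chars = sorted(set(text))
char2idx = {c: i for i, c in enumerate(chars)}
idx2char = np.array(chars)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 256),
    tf.keras.layers.LSTM(512, return_sequences=True),
    tf.keras.layers.Dense(len(chars)),                  # logits over the next character
])
# Training on (input window, next-character target) pairs is omitted here.

def generate(seed, length=200, temperature=1.0):
    ids = [char2idx[c] for c in seed if c in char2idx]
    out = list(seed)
    for _ in range(length):
        logits = model(np.array([ids]))[0, -1] / temperature
        next_id = tf.random.categorical(logits[None], num_samples=1)[0, 0].numpy()
        out.append(idx2char[next_id])
        ids = ids[1:] + [int(next_id)]
    return "".join(out)

app = Flask(__name__)

@app.route("/poem", methods=["POST"])
def poem():
    seed = request.get_json().get("seed", "the ")
    return jsonify({"poem": generate(seed)})
```

The React client would then simply POST a seed string to the endpoint and render the returned text.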

#nlp, #python

A new era of innovation: Moore’s Law is not dead and AI is ready to explode

Moore’s Law is dead, right? Think again.

Although the historical annual improvement of about 40% in central processing unit performance is slowing, the combination of CPUs packaged with alternative processors is improving at a rate of more than 100% per annum. These unprecedented and massive improvements in processing power combined with data and artificial intelligence will completely change the way we think about designing hardware, writing software and applying technology to businesses.
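To see why that gap matters, here is a quick back-of-the-envelope compounding calculation, assuming for simplicity that the stated rates hold steadily over a decade:

```python
# Back-of-the-envelope compounding over ten years, assuming the stated rates
# hold steadily (a simplification of the article's estimates).
cpu_only = 1.40 ** 10   # ~40% annual CPU improvement -> roughly 29x
combined = 2.00 ** 10   # >100% annual CPU + accelerator improvement -> roughly 1,024x
print(f"CPU alone over 10 years: ~{cpu_only:.0f}x")
print(f"Combined over 10 years:  ~{combined:,.0f}x")
```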

Every industry will be disrupted. You hear that all the time. Well, it’s absolutely true and we’re going to explain why and what it all means.

In this Breaking Analysis, we’re going to unveil some data that suggests we’re entering a new era of innovation where inexpensive processing capabilities will power an explosion of machine intelligence applications. We’ll also tell you what new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade. Read More

#strategy

Introducing Qiskit: Using Quantum Computers to Improve Machine Learning

Today, machine learning applications touch almost every aspect of business, science, and private life, ranging from speech and image recognition to generative models that improve drug design. Machine learning’s primary goal is to train computers to make sense of an ever-expanding pool of data. However, in order to learn from these increasingly complex datasets, the underlying models, such as deep neural networks, also become more sophisticated and expensive to train.

This results in complicated models with very long training times that risk overfitting without sufficient generalization. In other words, we must be vigilant that our models meaningfully understand our data rather than merely memorizing what they have already seen. Therefore, a lot of effort goes into improving models’ training algorithms, as well as into dedicated classical hardware. Read More
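For a flavor of what the Qiskit side of this looks like, below is a minimal parameterized quantum circuit of the kind quantum machine learning models are built from, where input features or trainable weights enter as rotation angles. The circuit and parameter values are illustrative only and are not taken from the article.

```python
# Minimal parameterized circuit of the kind quantum ML models build on:
# features or trainable weights enter as rotation angles. Illustrative only.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

theta = Parameter("theta")
phi = Parameter("phi")

qc = QuantumCircuit(2)
qc.ry(theta, 0)   # encode a feature / weight as a rotation on qubit 0
qc.ry(phi, 1)     # and another on qubit 1
qc.cx(0, 1)       # entangle the two qubits
qc.measure_all()

# Bind concrete values (e.g. scaled input features) before running the circuit
# on a simulator or hardware backend.
bound = qc.assign_parameters({theta: 0.3, phi: 1.2})
print(bound.draw())
```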

#quantum

Monkey MindPong

Read More

#videos

Global Trends 2040

During the past year, the COVID-19 pandemic has reminded the world of its fragility and demonstrated the inherent risks of high levels of interdependence. In coming years and decades, the world will face more intense and cascading global challenges ranging from disease to climate change to the disruptions from new technologies and financial crises. These challenges will repeatedly test the resilience and adaptability of communities, states, and the international system, often exceeding the capacity of existing systems and models. This looming disequilibrium between existing and future challenges and the ability of institutions and systems to respond is likely to grow and produce greater contestation at every level.

In this more contested world, communities are increasingly fractured as people seek security with like-minded groups based on established and newly prominent identities; states of all types and in all regions are struggling to meet the needs and expectations of more connected, more urban, and more empowered populations; and the international system is more competitive—shaped in part by challenges from a rising China—and at greater risk of conflict as states and nonstate actors exploit new sources of power and erode longstanding norms and institutions that have provided some stability in past decades. These dynamics are not fixed in perpetuity, however, and we envision a variety of plausible scenarios for the world of 2040—from a democratic renaissance to a transformation in global cooperation spurred by shared tragedy—depending on how these dynamics interact and human choices along the way. Read More

#ic

China leads the U.S. in three critical AI areas — data, applications, and integration — according to Bob Work

The US has a narrow lead on China in artificial intelligence, but the Chinese are catching up fast. In fact, they’re already at least narrowly ahead in three of six critical areas, the vice-chair of the National Security Commission on AI said today.

“We do not believe China is ahead right now in AI” overall, Robert Work said, speaking at a Pentagon press conference alongside Lt. Gen. Mike Groen, the director of the Joint Artificial Intelligence Center. But, Work went on, “look, AI is not a single technology, it is a bundle of technologies” – what professionals in the field call the “AI stack.”

As Work and the commission’s final report explain it, the AI stack has six interdependent layers. The foundational layer is not technology but people who know what to do with it. The second most fundamental layer is data, the raw material machine learning must ingest en masse to evolve. Then there’s hardware, on which everything else runs; algorithms, the complex and ever-evolving equations that drive machine learning; applications, which apply algorithms to specific functions; and integration, which ties different applications together. Read More

#china-vs-us, #dod, #ic