Google Brain’s AI achieves state-of-the-art text summarization performance

Summarizing text is a task at which machine learning algorithms are improving, as evidenced by a recent paper published by Microsoft. That’s good news — automatic summarization systems promise to cut down on the amount of message-reading enterprise workers do, which one survey estimates amounts to 2.6 hours each day.

Not to be outdone, a Google Brain and Imperial College London team built a system — Pre-training with Extracted Gap-sentences for Abstractive SUmmarization Sequence-to-sequence, or Pegasus — that leverages Google’s Transformer architecture combined with pretraining objectives tailored for abstractive text generation. They say it achieves state-of-the-art results on 12 summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills, and that it shows “surprising” performance on low-resource summarization, surpassing previous top results on six data sets with only 1,000 examples. Read More
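
At the heart of Pegasus is the gap-sentence pretraining objective: whole sentences judged most informative are removed from a document, and the model must generate them from what remains, which closely mirrors abstractive summarization. Here is a minimal sketch of that sentence-selection step; the paper scores sentences with ROUGE1-F1, which we approximate here with a plain unigram-overlap F1, and the mask ratio and helper names are illustrative:

```python
# Gap-sentence selection sketch: pick the sentences that best "summarize"
# the rest of the document, mask them, and use them as the generation target.
# Illustrative only: the paper uses ROUGE1-F1; we use unigram-overlap F1.

def unigram_f1(candidate, reference):
    """F1 overlap between a candidate sentence and reference text (token lists)."""
    cand, ref = set(candidate), set(reference)
    overlap = len(cand & ref)
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def select_gap_sentences(sentences, gap_ratio=0.3):
    """Score each sentence against the rest of the document; return the
    indices of the top-scoring ones (the 'gap' sentences to mask)."""
    tokens = [s.lower().split() for s in sentences]
    scores = []
    for i, sent in enumerate(tokens):
        rest = [tok for j, t in enumerate(tokens) if j != i for tok in t]
        scores.append((unigram_f1(sent, rest), i))
    n_gaps = max(1, int(len(sentences) * gap_ratio))
    return sorted(i for _, i in sorted(scores, reverse=True)[:n_gaps])

doc = [
    "Pegasus pretrains a Transformer on web and news text.",
    "Important sentences are removed from each document.",
    "The model learns to generate the removed sentences.",
    "This objective resembles abstractive summarization.",
]
gaps = select_gap_sentences(doc)
source = " ".join("<MASK>" if i in gaps else s for i, s in enumerate(doc))
target = " ".join(doc[i] for i in gaps)
print(source)  # encoder input with masked gap sentences
print(target)  # decoder target: the gap sentences themselves
```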

#news-summarization, #nlp

Relative contributions of Shakespeare and Fletcher in Henry VIII: An Analysis Based on Most Frequent Words and Most Frequent Rhythmic Patterns

The versified play Henry VIII is nowadays widely recognized to be a collaborative work, not written solely by William Shakespeare. We employ a combined analysis of vocabulary and versification, together with machine learning techniques, to determine which other authors took part in the writing of the play and what their relative contributions were. Unlike most previous studies, we go beyond the attribution of particular scenes and use the rolling attribution approach to determine the probabilities of authorship of pieces of text, without respecting the scene boundaries. Our results strongly support the canonical division of the play between William Shakespeare and John Fletcher proposed by James Spedding, but also bring new evidence supporting the modifications proposed later by Thomas Merriam. Read More
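
The rolling attribution idea is simple to sketch: train a classifier on securely attributed Shakespeare and Fletcher text, then slide an overlapping window across the disputed play and record authorship probabilities per window. Below is a minimal illustration with scikit-learn using most-frequent-word frequencies only; the real study also uses rhythmic (versification) features, and the corpora, window size, and feature count here are placeholders:

```python
# Rolling attribution sketch: score overlapping windows of a disputed play
# with a classifier trained on securely attributed text. Corpora, window
# size, and feature count below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def windows(words, size=500, step=100):
    """Overlapping word windows that ignore scene boundaries."""
    for start in range(0, max(1, len(words) - size + 1), step):
        yield " ".join(words[start:start + size])

def rolling_attribution(shakespeare_text, fletcher_text, disputed_text):
    shak_wins = list(windows(shakespeare_text.split()))
    flet_wins = list(windows(fletcher_text.split()))
    # Features: counts of the most frequent words across the training windows.
    vec = CountVectorizer(max_features=500)
    X_train = vec.fit_transform(shak_wins + flet_wins)
    y_train = [0] * len(shak_wins) + [1] * len(flet_wins)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    disputed_wins = list(windows(disputed_text.split()))
    # P(Fletcher) for each rolling window of the disputed play.
    return clf.predict_proba(vec.transform(disputed_wins))[:, 1]

if __name__ == "__main__":
    # Degenerate toy corpora, just to show the call pattern.
    shak = "thou art more lovely " * 300
    flet = "the river runs softly " * 300
    disputed = shak[:2000] + flet[:2000]
    print(rolling_attribution(shak, flet, disputed))
```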

#nlp

Reimagining Reinforcement Learning – Upside Down

For all the hype around game-playing wins and self-driving cars, traditional reinforcement learning (RL) has yet to deliver as a reliable tool for ML applications. Here we explore its main drawbacks, as well as an innovative approach to RL that dramatically reduces the compute required and the time to train. Read More
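
The title alludes to "Upside-Down RL" (Schmidhuber's term), in which reward maximization is replaced by ordinary supervised learning: a behavior function learns to map a state plus a command, a desired return over a desired horizon, to the action that achieved it in past episodes. A minimal sketch under that reading, with a toy environment and illustrative hyperparameters:

```python
# Upside-Down RL sketch: learn action = f(state, desired_return, horizon)
# by supervised learning on past episodes, instead of maximizing reward
# directly. Environment, model, and hyperparameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def run_episode(policy, env_steps=20):
    """Toy 1-D environment: the state drifts by the chosen action (+1/-1);
    reward is 1 whenever the state moves toward zero."""
    state, trajectory = 5.0, []
    for _ in range(env_steps):
        action = policy(state)
        reward = 1.0 if abs(state + action) < abs(state) else 0.0
        trajectory.append((state, action, reward))
        state += action
    return trajectory

# 1. Collect experience with a random policy.
episodes = [run_episode(lambda s: rng.choice([-1.0, 1.0])) for _ in range(200)]

# 2. Relabel: for each step, the "command" is the return and horizon
#    actually achieved from that point onward.
X, y = [], []
for ep in episodes:
    for t, (state, action, _) in enumerate(ep):
        ret_to_go = sum(r for _, _, r in ep[t:])
        X.append([state, ret_to_go, len(ep) - t])
        y.append(action)

behavior = LogisticRegression(max_iter=1000).fit(X, y)

# 3. Act by asking for a high return (horizon held fixed here for brevity;
#    a full implementation would decrement it each step).
def command_policy(state, desired_return=15.0, horizon=20):
    return behavior.predict([[state, desired_return, horizon]])[0]

print(run_episode(lambda s: command_policy(s)))
```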

#reinforcement-learning

How Federated Learning is going to revolutionize AI

This year we witnessed a remarkable astronomical first: a picture of a black hole. But did you know this black hole is more than 50 million light-years away? Capturing that picture would have required a single-dish telescope as big as the Earth itself. Since building such a telescope was practically impossible, scientists instead linked together a network of telescopes from across the world; the Event Horizon Telescope thus created was a large computational telescope with an effective aperture the diameter of the Earth.

This is an excellent example of decentralized computation, and it shows the power that decentralized learning could bring to other fields as well.

Built on the same principle, a new framework has emerged in AI that can compute across millions of devices and consolidate the results to provide better predictions and an enhanced user experience. Welcome to the era of federated (decentralized) machine learning. Read More
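
The consolidation step is typically federated averaging: each device trains on its own data and sends back only model weights, which a server averages, weighted by how much data each client holds. A minimal numpy sketch of a few such rounds; the linear model, client sizes, and learning rate are illustrative stand-ins for a real on-device model:

```python
# Federated averaging sketch: clients train locally on private data and
# share only model weights; the server averages them. The linear model
# and synthetic data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n):
    """Synthetic private dataset for one client."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    """Plain gradient descent on the client's own data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

clients = [make_client_data(n) for n in (50, 200, 80)]  # unequal data sizes
w_global = np.zeros(2)

for _ in range(10):
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server step: data-size-weighted average of client weights (FedAvg).
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)  # approaches true_w; no raw data ever leaves a client
```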

#federated-learning, #splitnn

Is China Beating America to AI Supremacy?

Beijing is not just trying to master artificial intelligence—it is succeeding. AI will have as transformative an impact on commerce and national security over the next two decades as semiconductors, computers and the web have had over the past quarter century.

We begin with four key points. First, most Americans believe that U.S. leadership in advanced technologies is so entrenched that it is unassailable.

Second, China’s zeal to master AI goes far beyond its recognition that this suite of technologies promises to be the biggest driver of economic advances in the next quarter century.

Third, while we share the general enthusiasm about AI’s potential to make huge improvements in human wellbeing, the development of machines with intelligence vastly superior to humans will pose special, perhaps even unique, risks.

Fourth, China’s advantages in size, data collection and national determination have allowed it, over the past decade, to close the gap with American leaders of this industry. Read More

#china-ai, #china-vs-us

AI has a privacy problem, but these techniques could fix it

Artificial intelligence promises to transform — and indeed, has already transformed — entire industries, from civic planning and health care to cybersecurity. But privacy remains an unsolved challenge in the industry, particularly where compliance and regulation are concerned.

Recent controversies put the problem into sharp relief. The Royal Free London NHS Foundation Trust, a division of the U.K.’s National Health Service based in London, provided Alphabet’s DeepMind with data on 1.6 million patients without their consent. Google — whose health data-sharing partnership with Ascension became the subject of scrutiny in November — abandoned plans to publish scans of chest X-rays over concerns that they contained personally identifiable information. This past summer, Microsoft quietly removed a data set (MS Celeb) with more than 10 million images of people after it was revealed that some weren’t aware they had been included. Read More

#federated-learning, #homomorphic-encryption, #split-learning

Job Role: AI Strategist

Takeaway: AI strategists have important roles not just in shepherding projects through to completion, but in ensuring those projects make sense in a corporate context.

What is an AI strategist? It’s an interesting question, particularly in the context of how artificial intelligence (AI) is reshaping the business world and revolutionizing how companies sell products and services. Read More

#strategy

Twelve Million Phones, One Dataset, Zero Privacy

Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. Read More

#cyber, #privacy, #surveillance, #wifi

Life after artificial intelligence

AI stands to be the most radically transformative technology ever developed by humankind. What scenarios are looming just around the corner as AI technology advances?

What will we invent after we invent everything that can be invented?

Artificial intelligence stands to be the most radically transformative technology ever developed by the human race. As a former artificial intelligence entrepreneur turned investor, I spend a lot of time thinking about the future of this technology: where it’s taking us and how our lives are going to re-form around it. We humans tend to develop emergent technologies to the nth degree, so I think there is a certain inevitability to the far-out techno-utopian visions from certain branches of science fiction — it just makes common sense to me and many others. Why shouldn’t AI change everything? Read More

#artificial-intelligence, #strategy

XAI—Explainable artificial intelligence

Explainability is essential for users to effectively understand, trust, and manage powerful artificial intelligence applications.

Recent successes in machine learning (ML) have led to a new wave of artificial intelligence (AI) applications that offer extensive benefits to a diverse range of fields. However, many of these systems are not able to explain their autonomous decisions and actions to human users. Explanations may not be essential for certain AI applications, and some AI researchers argue that the emphasis on explanation is misplaced, too difficult to achieve, and perhaps unnecessary. However, for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners [see recent reviews (1–3)].

Recent AI successes are largely attributed to new ML techniques that construct models in their internal representations. These include support vector machines (SVMs), random forests, probabilistic graphical models, reinforcement learning (RL), and deep learning (DL) neural networks. Although these models exhibit high performance, they are opaque in terms of explainability. There may be inherent conflict between ML performance (e.g., predictive accuracy) and explainability. Often, the highest performing methods (e.g., DL) are the least explainable, and the most explainable (e.g., decision trees) are the least accurate. Figure 1 illustrates this with a notional graph of the performance-explainability tradeoff for some of the ML techniques. Read More
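
The tradeoff is easy to observe empirically: a shallow decision tree yields rules a human can read end to end, while an ensemble of hundreds of trees is usually more accurate but offers no comparably compact explanation. A minimal scikit-learn sketch; the dataset and model settings are illustrative:

```python
# Performance vs. explainability sketch: a shallow decision tree is readable
# but usually less accurate than an opaque ensemble on the same data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("tree accuracy:  ", tree.score(X_te, y_te))
print("forest accuracy:", forest.score(X_te, y_te))

# The tree's entire decision process fits in a few lines of if/else rules;
# the 300-tree forest has no similarly compact, human-readable form.
print(export_text(tree, feature_names=list(load_breast_cancer().feature_names)))
```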

#explainability