Summary: Some 99% of applied NLP work involves chatbots or translation. This is an interesting story about expanding the bounds of NLP and feature creation to predict bestselling novels. The authors created over 20,000 NLP features, about 2,700 of which proved predictive, yielding roughly 90% accuracy in predicting NYT bestsellers. Read More
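The study's actual 20,000+ features are not public, but the general idea (turning raw text into stylometric signals that a classifier can consume) can be sketched in a few lines. The feature names below are illustrative assumptions, not the authors' features:

```python
import re
from collections import Counter

def stylometric_features(text):
    """Compute a few illustrative stylometric features of the kind a
    bestseller-prediction model might use (hypothetical examples only)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    return {
        # average words per sentence: a rough proxy for pacing
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # vocabulary richness: unique words over total words
        "type_token_ratio": len(counts) / max(len(words), 1),
        # rate of common conjunctions: a simple style marker
        "and_but_rate": (counts["and"] + counts["but"]) / max(len(words), 1),
    }

sample = "She ran. And then she stopped. But the rain kept falling."
feats = stylometric_features(sample)
```

Feature dictionaries like this, computed per manuscript, would then be fed to an ordinary supervised classifier trained against bestseller labels.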
Monthly Archives: December 2019
Artificial intelligence: How to measure the “I” in AI
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.
Last week, Lee Se-dol, the South Korean Go champion who lost in a historic matchup against DeepMind’s artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.
“With the debut of AI in Go games, I’ve realized that I’m not at the top even if I become the number one through frantic efforts,” Lee told the Yonhap news agency. “Even if I become the number one, there is an entity that cannot be defeated.” Read More
On the Measure of Intelligence
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to “buy” arbitrary levels of skills for a system, in a way that masks the system’s own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience as critical pieces to be accounted for in characterizing intelligent systems. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a new benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
Read More
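ARC tasks are distributed as JSON, each giving a few input/output grid demonstrations plus held-out test inputs; a solver must infer the transformation from the demonstrations alone. The toy task and the trivial hypothesis-testing solver below are a minimal sketch of that format, not ARC's actual evaluation harness:

```python
# A toy task in ARC's JSON-style format: grids are lists of rows,
# each cell an integer color. This example's transformation is a
# left-right mirror (a deliberately simple stand-in).
task = {
    "train": [
        {"input": [[1, 0], [0, 0]], "output": [[0, 1], [0, 0]]},
        {"input": [[0, 0], [2, 0]], "output": [[0, 0], [0, 2]]},
    ],
    "test": [{"input": [[3, 0], [0, 0]]}],
}

def mirror(grid):
    """Candidate hypothesis: flip each row left-to-right."""
    return [list(reversed(row)) for row in grid]

def solve(task, hypotheses):
    """Return the first hypothesis consistent with all demonstrations,
    applied to the test inputs; None if nothing fits."""
    for h in hypotheses:
        if all(h(p["input"]) == p["output"] for p in task["train"]):
            return [h(t["input"]) for t in task["test"]]
    return None

prediction = solve(task, [mirror])  # → [[[0, 3], [0, 0]]]
```

The paper's point is that a solver can only enumerate sensible hypotheses if it is equipped with the right priors (objectness, symmetry, counting), which is what ARC is designed to probe.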
Virtual robots that teach themselves kung fu could revolutionize video games
In the not-so-distant future, characters might practice kung-fu kicks in a digital dojo before bringing their moves into the latest video game.
AI researchers at UC Berkeley and the University of British Columbia have created virtual characters capable of imitating the way a person performs martial arts, parkour, and acrobatics, practicing moves relentlessly until they get them just right.
The work could transform the way video games and movies are made. Read More
Machine learning ethics: what you need to know and what you can do
Ethics is, without a doubt, one of the most important topics to emerge in machine learning and artificial intelligence over the last year. While the reasons for this are complex, it nevertheless underlines that the area has reached technological maturity. After all, if artificial intelligence systems weren’t having a real, demonstrable impact on wider society, why would anyone be worried about their ethical implications?
It’s easy to dismiss the debate around machine learning and artificial intelligence as abstract and irrelevant to engineers’ and developers’ immediate practical concerns. However, this is wrong. Ethics needs to be seen as an important practical consideration for anyone using and building machine learning systems. Read More
Huawei phones & 5G NO LONGER use U.S. components
According to the Wall Street Journal, citing a report from UBS and the Japanese technology laboratory Fomalhaut Techno Solutions, the Huawei Mate 30 Pro no longer contains US-made parts. The Wall Street Journal pointed out that Huawei has made great progress in getting rid of American parts and chips. Companies such as iFixit and Tech Insights Inc. disassembled the Huawei Mate 30 Pro to check the source of the components and reached similar conclusions. This means that Huawei phones for next year will probably not use any components from the US. Read More
How artificial intelligence and data add value to businesses
Artificial intelligence (AI) is at the cutting edge of innovation. But how do companies find the expertise necessary to utilize it, and then take it to market? In this video, recorded at the Aspen Ideas Festival in June, Andrew Ng, cofounder of Coursera, AI Fund, and Landing.AI, discusses the difference between an AI-enabled business versus a true AI company, and how businesses can organize, hire, and make use of AI to add value. Read More
Chinese tech groups shaping UN facial recognition standards
Chinese technology companies are shaping new facial recognition and surveillance standards at the UN, according to leaked documents obtained by the Financial Times, as they try to open up new markets in the developing world for their cutting-edge technologies.
Companies such as ZTE, Dahua and China Telecom are among those proposing new international standards — specifications aimed at creating universally consistent technology — in the UN’s International Telecommunication Union (ITU) for facial recognition, video monitoring, city and vehicle surveillance. Read More
TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents
Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time. In this work, we show that these training-time vulnerabilities extend to deep reinforcement learning (DRL) agents and can be exploited by an adversary with access to the training process. In particular, we focus on Trojan attacks that augment the function of reinforcement learning policies with hidden behaviors. We demonstrate that such attacks can be implemented through minuscule data poisoning (as little as 0.025% of the training data) and in-band reward modification that does not affect the reward on normal inputs. The policies learned with our proposed attack approach perform nearly identically to benign policies but deteriorate drastically when the Trojan is triggered, in both targeted and untargeted settings. Furthermore, we show that existing Trojan defense mechanisms for classification tasks are not effective in the reinforcement learning setting. Read More
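The general shape of such an attack (stamp a trigger pattern into a tiny fraction of training states, relabel the action, and nudge the reward within its normal range) can be sketched abstractly. All names and values below are illustrative assumptions; TrojDRL's actual attack operates inside the DRL training loop, not on a static dataset:

```python
import random

def poison_dataset(transitions, rate=0.00025, trigger_value=9, target_action=0):
    """Illustrative Trojan poisoning over (state, action, reward) tuples.

    A `rate` fraction of transitions (0.025% by default, echoing the
    paper's figure) gets a trigger stamped into the state, the action
    relabeled to the attacker's target, and the reward boosted in-band,
    so the learned policy associates the trigger with the hidden behavior.
    """
    poisoned = []
    for state, action, reward in transitions:
        if random.random() < rate:
            state = list(state)
            state[0] = trigger_value      # in-band trigger pattern
            action = target_action        # hidden behavior to instill
            reward = max(reward, 1.0)     # reward stays in the normal range
        poisoned.append((tuple(state), action, reward))
    return poisoned
```

Because untriggered transitions are untouched, the poisoned policy behaves normally in evaluation until an input containing the trigger appears.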
Cybersecurity in 2020: More targeted attacks, AI not a prevention panacea
Given the proliferation of high-profile attacks in 2019, the security outlook for next year (and the next decade) is filled with potential pitfalls: challenges persist in maintaining enterprise security postures, particularly as attack surfaces widen and security operations teams are spread ever thinner. Read More