2020 Tencent AI White Paper (Translated)

The development of artificial intelligence has not been calm. Since the matches between AlphaGo and human players relaunched the artificial intelligence boom, the field has experienced hype and frenzy. After the bubble faded came difficulties with commercialization and challenges around privacy and ethics. But in the past six months, the integration of artificial intelligence and industry has never been closer.

2020 is a year that will go down in history. In the context of the global fight against the pandemic, artificial intelligence responded quickly in medicine, urban governance, industry, contactless services, and other fields, coming down from the “cloud” and playing a key role in improving the overall efficiency of the fight against the epidemic. The novel coronavirus pandemic has become the touchstone of digital technology. As an important driving force of a new round of technological revolution and industrial transformation, artificial intelligence has demonstrated its true value to society.

In the post-epidemic era, long-term economic recovery and development have become the focus. The new infrastructure initiative has given artificial intelligence a new mission, requiring AI technology to play a leading role in future industries. Through deep integration with traditional industries, it will help the real economy transform toward digitalization and intelligence, give birth to new business formats, and realize new transformations and new development.

From the demand side, the pressure of long-term economic transformation and the recent push to recover from the epidemic form a dual pull: all walks of life are fully aware that accelerating digital, networked, and intelligent transformation is an inevitable trend. From the supply side, AI technology is part of an important national strategy, and the various ecological layers of the industry have continuously been enriched and matured.
Industry participants have focused on fields where value is concentrated, and the main theme is to discard the “fake” and retain the “true.”

Therefore, we believe that artificial intelligence is entering a stage of integration between technology and industry, characterized by “ubiquitous intelligence.” Read More

#big7, #china-ai

Deep Learning Modeling of the Limit Order Book: A Comparative Perspective

The present work addresses theoretical and practical questions in the domain of deep learning for high-frequency trading, with a thorough review and analysis of the literature and state-of-the-art models. Random models, logistic regressions, LSTMs, LSTMs equipped with an attention mask, CNN-LSTMs, and MLPs are compared on the same tasks, feature space, and dataset, and clustered according to pairwise similarity and performance metrics. The underlying dimensions of the modeling techniques are then investigated to understand whether they are intrinsic to the Limit Order Book’s dynamics. The Multilayer Perceptron is observed to perform comparably to, or better than, state-of-the-art CNN-LSTM architectures, indicating that dynamic spatial and temporal dimensions are a good approximation of the LOB’s dynamics, but not necessarily the true underlying dimensions. Read More
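To make the "plain MLP as a strong LOB baseline" finding concrete, here is a minimal sketch of the kind of comparison the paper runs. Everything here is illustrative, not the paper's setup: the features stand in for a flattened order-book snapshot, the label is a planted mid-price direction rule, and the network is a tiny one-hidden-layer MLP trained by hand-rolled SGD.

```python
import math, random

random.seed(0)

# Hypothetical toy setup: each sample is a flattened snapshot of the top
# levels of a limit order book (price/volume features); the label is the
# direction of the next mid-price move (1 = up, 0 = down/flat).
N_FEATURES = 8   # e.g. 2 levels x (bid/ask) x (price, volume)
HIDDEN = 16

def synth_sample():
    x = [random.gauss(0.0, 1.0) for _ in range(N_FEATURES)]
    # Planted rule: an order-flow-imbalance-like signal drives the move.
    y = 1 if x[0] - x[1] > 0 else 0
    return x, y

# One-hidden-layer ReLU MLP with a sigmoid output, trained by plain SGD.
W1 = [[random.gauss(0, 0.5) for _ in range(N_FEATURES)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
W2 = [random.gauss(0, 0.5) for _ in range(HIDDEN)]
b2 = 0.0

def forward(x):
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    z = sum(w * hi for w, hi in zip(W2, h)) + b2
    z = max(-30.0, min(30.0, z))            # avoid overflow in exp
    return h, 1.0 / (1.0 + math.exp(-z))

def train_step(x, y, lr=0.05):
    global b2
    h, p = forward(x)
    g = p - y                               # dLoss/dz for sigmoid + cross-entropy
    for j in range(HIDDEN):
        if h[j] > 0:                        # ReLU gate on the hidden gradient
            gh = g * W2[j]
            for i in range(N_FEATURES):
                W1[j][i] -= lr * gh * x[i]
            b1[j] -= lr * gh
        W2[j] -= lr * g * h[j]
    b2 -= lr * g

data = [synth_sample() for _ in range(2000)]
for _ in range(5):
    for x, y in data:
        train_step(x, y)

test = [synth_sample() for _ in range(500)]
acc = sum((forward(x)[1] > 0.5) == (y == 1) for x, y in test) / len(test)
print(f"MLP test accuracy on planted-rule toy data: {acc:.2f}")
```

In the paper's actual experiments the baselines and CNN-LSTMs are run on a shared real LOB dataset; this sketch only shows why an MLP over flattened book snapshots is a legitimate competitor when the predictive signal is not strongly spatio-temporal.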

#investing, #deep-learning

Banning TikTok is a terrible idea

It’s been a rough spell for TikTok. Secretary of State Mike Pompeo said the U.S. would ban the app because it “puts your private information in the hands of the Chinese Communist Party.” In case any observers dismissed his threat as more bark than bite, White House advisor Peter Navarro followed up with a warning that “strong action” against TikTok is coming. The onslaught is not just coming from U.S. policymakers, either. In June, the Indian government announced it would block TikTok along with nearly 60 other Chinese mobile apps for “stealing and surreptitiously transmitting users’ data in an unauthorized manner to servers which have locations outside India.”

There are several options in play for what so-called “strong action” could look like. Read More

#china-vs-us, #surveillance

Messier than Oil: Assessing Data Advantage in Military AI

“Data is the new oil,” or so we’ve been told. From policy pronouncements to media reports to op-eds, many have used the attractive analogy when discussing artificial intelligence. Kai-Fu Lee, author of AI Superpowers, has written, “in the age of AI, where data is the new oil, China is the new Saudi Arabia.”

Yet reality is far messier. With a population of 1.4 billion people, robust surveillance and data collection capabilities, and access to private sector data, the Chinese government appears to have vast quantities of data. But even if China has far more data than the United States, does this raw data necessarily translate into a meaningful advantage for China? And if so, is this enough to overtake the United States in AI? Both countries invest in AI for military applications; will China’s potentially greater access to commercial data accelerate its development of AI-enabled weapons relative to the United States?

This paper reviews the challenges in assessing whether the United States or China has a “data advantage” in the military AI realm—i.e., whether one country has access to more data in a way that confers an advantage in developing military AI systems. Read More

#china-vs-us, #dod

DeepMind’s Newest AI Programs Itself to Make All the Right Decisions

When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn’t to be.

Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up. Read More

#reinforcement-learning

Discovering Reinforcement Learning Algorithms

Reinforcement learning (RL) algorithms update an agent’s parameters according to one of several possible rules, discovered manually through years of research. Automating the discovery of update rules from data could lead to more efficient algorithms, or algorithms that are better adapted to specific environments. Although there have been prior attempts at addressing this significant scientific challenge, it remains an open question whether it is feasible to discover alternatives to fundamental concepts of RL such as value functions and temporal-difference learning. This paper introduces a new meta-learning approach that discovers an entire update rule, including both ‘what to predict’ (e.g. value functions) and ‘how to learn from it’ (e.g. bootstrapping), by interacting with a set of environments. The output of this method is an RL algorithm that we call Learned Policy Gradient (LPG). Empirical results show that our method discovers its own alternative to the concept of value functions. Furthermore, it discovers a bootstrapping mechanism to maintain and use its predictions. Surprisingly, when trained solely on toy environments, LPG generalises effectively to complex Atari games and achieves non-trivial performance. This shows the potential to discover general RL algorithms from data. Read More
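The core structural idea — an agent carries a prediction vector y(s) with no fixed semantics, and a meta-learned network turns each transition into targets for both the policy and y — can be sketched as follows. This is only the plumbing: the update network's weights here are random stand-ins for meta-trained parameters, and the architecture is a hypothetical linear map, not DeepMind's.

```python
import math, random

random.seed(1)

# Structural sketch of an LPG-style learned update rule. In classic RL the
# prediction y(s) would be a value function; here its meaning is left for
# meta-learning to discover. Nothing below is meta-trained: U is random.

Y_DIM = 4  # size of the agent's prediction vector y(s)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

# "Meta-learned" update network: a fixed random linear map from a
# transition summary to (pi_target, y_target).
IN_DIM = 3 + 2 * Y_DIM          # [r, done, log pi(a|s)] + y(s) + y(s')
U = [[random.gauss(0, 0.3) for _ in range(IN_DIM)] for _ in range(1 + Y_DIM)]

def lpg_targets(r, done, log_pi_a, y_s, y_next):
    feats = [r, float(done), log_pi_a] + list(y_s) + list(y_next)
    out = [sum(w * f for w, f in zip(row, feats)) for row in U]
    pi_hat = out[0]                        # weight on the log pi(a|s) gradient
    y_hat = [sigmoid(o) for o in out[1:]]  # target for the prediction vector
    return pi_hat, y_hat

# One inner-loop step on a toy transition s0 -> s1: nudge y(s0) toward
# y_hat; pi_hat would scale the policy-gradient term for the taken action.
y_table = {0: [0.5] * Y_DIM, 1: [0.5] * Y_DIM}
log_pi_a = math.log(0.25)                  # agent's prob of the taken action

pi_hat, y_hat = lpg_targets(r=1.0, done=False, log_pi_a=log_pi_a,
                            y_s=y_table[0], y_next=y_table[1])
lr = 0.1
y_table[0] = [yi + lr * (t - yi) for yi, t in zip(y_table[0], y_hat)]

print("pi_hat (policy-gradient weight):", round(pi_hat, 3))
print("updated y(s0):", [round(v, 3) for v in y_table[0]])
```

In the actual method, U's parameters are optimized in an outer loop across many environments so that agents trained with this rule achieve high return — that outer loop is what makes the rule "discovered" rather than designed.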

#reinforcement-learning

Artificial Intelligence Ethics Framework For The Intelligence Community

This is an ethics guide for United States Intelligence Community personnel on how to procure, design, build, use, protect, consume, and manage AI and related data. Answering these questions, in conjunction with your agency-specific procedures and practices, promotes ethical design of AI consistent with the Principles of AI Ethics for the Intelligence Community.

This guide is not a checklist and some of the concepts discussed herein may not apply in all instances. Instead, this guide is a living document intended to provide stakeholders with a reasoned approach to judgment and to assist with the documentation of considerations associated with the AI lifecycle. In doing so, this guide will enable mission through an enhanced understanding of goals between AI practitioners and managers while promoting the ethical use of AI. Read More

#ethics, #ic

‘AI on the Fly’: Moving AI Compute and Storage to the Data Source

The impact of artificial intelligence is starting to be realized across a broad spectrum of industries. Typically, deep learning (DL) training is a centralized datacenter process and inferencing occurs in the field. To build an AI system, data is collected and used by data scientists to train models in DL frameworks on the fastest accelerated computers in the world; the trained model is then sent to the field, where an “AI at the Edge” system runs inference on it in day-to-day decision making. Read More
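The train-centrally, infer-at-the-edge workflow described above can be reduced to a minimal sketch: fit a model in the "datacenter," serialize it as an artifact, and have the "edge" side load that artifact and predict locally. The model, data, and JSON format are all toy stand-ins for illustration.

```python
import json, random

random.seed(2)

# --- Datacenter side: collect data and fit a model (least squares) ---
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [3.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]   # noisy y = 3x + 1
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
b = my - w * mx

artifact = json.dumps({"w": w, "b": b})    # the model shipped to the field

# --- Edge side: load the artifact and run inference locally ---
model = json.loads(artifact)

def infer(x):
    return model["w"] * x + model["b"]

print("edge prediction for x=2.0:", round(infer(2.0), 2))
```

The point of "AI on the Fly" is to shorten this loop by moving compute and storage toward where `xs` is generated, rather than hauling raw data back to the datacenter for every cycle.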

#iot

From Long-distance Entanglement to Building a Nationwide Quantum Internet

Today, many scientific experts recognize that building and scaling quantum-protected and enhanced communication networks are among the most important technological frontiers of the 21st century. The international research community perceives the construction of a first prototype global quantum network—the Quantum Internet—to be within reach over the next decade.

In February 2020, the U.S. Department of Energy (DOE)’s Office of Advanced Scientific Computing Research hosted the Quantum Internet Blueprint workshop to define a potential roadmap toward building the first nationwide quantum Internet. Read More

#quantum

What can I do here? A Theory of Affordances in Reinforcement Learning

Reinforcement learning algorithms usually assume that all actions are always available to an agent. However, both people and animals understand the general link between the features of their environment and the actions that are feasible. Gibson (1977) coined the term “affordances” to describe the fact that certain states enable an agent to do certain actions, in the context of embodied agents. In this paper, we develop a theory of affordances for agents who learn and plan in Markov Decision Processes. Affordances play a dual role in this case. On one hand, they allow faster planning, by reducing the number of actions available in any given situation. On the other hand, they facilitate more efficient and precise learning of transition models from data, especially when such models require function approximation. We establish these properties through theoretical results as well as illustrative examples. We also propose an approach to learn affordances and use it to estimate transition models that are simpler and generalize better. Read More
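The "faster planning" claim is easy to see in miniature: if an affordance maps each state to the subset of actions worth considering, value iteration performs fewer backups while (when the affordance keeps all optimal actions) reaching the same values. The chain MDP and the affordance set below are illustrative, not from the paper.

```python
# Affordance-restricted value iteration on a toy chain MDP: states 0..9,
# actions left/right, reward for reaching (or staying at) state 9.
N = 10
ACTIONS = ["left", "right"]
GAMMA = 0.9

def step(s, a):
    s2 = max(0, s - 1) if a == "left" else min(N - 1, s + 1)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r

def value_iteration(afford, iters=100):
    V = [0.0] * N
    backups = 0                      # count of (state, action) evaluations
    for _ in range(iters):
        for s in range(N):
            vals = []
            for a in afford(s):
                s2, r = step(s, a)
                vals.append(r + GAMMA * V[s2])
                backups += 1
            V[s] = max(vals)
    return V, backups

all_actions = lambda s: ACTIONS
# Affordance: on this chain, only "right" ever moves toward the reward,
# so it is the only action afforded in every state.
afforded = lambda s: ["right"]

V_full, n_full = value_iteration(all_actions)
V_aff, n_aff = value_iteration(afforded)
print("max value gap:", max(abs(a - b) for a, b in zip(V_full, V_aff)))
print("backups: full =", n_full, " afforded =", n_aff)
```

Here the afforded planner does half the backups and loses nothing, because the pruned action is never optimal; the paper's contribution is formalizing when such pruning is safe and learning the affordance sets from data rather than hand-specifying them as above.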

#human, #reinforcement-learning