Future Today Institute: China will become the world’s ‘unchallenged AI hegemon’ in 2018

Future Today Institute founder Amy Webb has released her annual tech trends report, and much of it focuses on the continuing impact of artificial intelligence. Other trends highlighted by the report include space travel, human gene editing, and a global shortage of data scientists. Webb, a quantitative futurist and professor of strategic foresight at the NYU Stern School of Business, released the report today in a presentation at SXSW in Austin, Texas.

Now in its 11th year, the report identifies 225 trends across 20 industries, with roughly 70 of those trends related directly to AI. Read More

#china-ai

The convergence of 5G and AI: A venture capitalist’s view

As an emerging-technology investor, I am always on the lookout for new technologies that can improve existing markets and enable new ones.

That’s why 5G really intrigues me. This advanced digital cellular network protocol opens up new frequency spectrum, bringing with it much higher throughput and lower latency. This positions 5G as a potentially game-changing technology and investment opportunity that could fuel a new generation of applications and solve some of the world’s biggest problems.

5G is not just “4G but better.” It taps new spectrum that will drive innovative business opportunities and use cases. For example, in the 28 GHz and 39 GHz bands, a.k.a. the “millimeter wave band,” reams of new bandwidth could transform the communications carrier landscape as we know it while further improving the end-user experience of mobility. Read More
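To see why new spectrum translates into higher throughput, here is a back-of-the-envelope Shannon-capacity comparison. The channel widths and SNR below are illustrative assumptions, not carrier figures.

```python
import math

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR), in Gbit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# Illustrative channel widths: a 100 MHz sub-6 GHz carrier vs. an
# 800 MHz millimeter-wave carrier in the 28 GHz band, both at 20 dB SNR.
for label, bw_hz in [("sub-6 GHz, 100 MHz", 100e6), ("mmWave, 800 MHz", 800e6)]:
    print(f"{label}: ~{shannon_capacity_gbps(bw_hz, snr_db=20):.1f} Gbit/s upper bound")
```

Because capacity scales linearly with bandwidth, eight times the channel width gives roughly eight times the theoretical ceiling, which is the core of the millimeter-wave argument above.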

#5g

Learning to Follow Directions in Street View

Navigating and understanding the real world remains a key challenge in machine learning and inspires a great variety of research in areas such as language grounding, planning, navigation and computer vision. We propose an instruction following task that requires all of the above, and which combines the practicality of simulated environments with the challenges of ambiguous, noisy real world data. StreetNav is built on top of Google Street View and provides visually accurate environments representing real places. Agents are given driving instructions which they must learn to interpret in order to successfully navigate in this environment. Since humans equipped with driving instructions can readily navigate in previously unseen cities, we set a high bar and test our trained agents for similar cognitive capabilities. Although deep reinforcement learning (RL) methods are frequently evaluated only on data that closely follow the training distribution, our dataset extends to multiple cities and has a clean train/test separation. This allows for thorough testing of generalisation ability. This paper presents the StreetNav environment and tasks, a set of novel models that establish strong baselines, and analysis of the task and the trained agents. Read More
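As a rough illustration of the interaction protocol such an instruction-following task implies, here is a minimal toy sketch. StreetNavEnv and the random policy below are hypothetical stand-ins for the actual StreetNav environment and the paper's agents, not their code.

```python
# Toy sketch of an instruction-conditioned navigation loop (hypothetical names).
from dataclasses import dataclass
import random

@dataclass
class Step:
    observation: list   # e.g. visual features for the current panorama
    reward: float
    done: bool

class StreetNavEnv:
    """Toy stand-in: a chain of N panoramas; the goal is the last one."""
    def __init__(self, num_panoramas: int = 5):
        self.n = num_panoramas
        self.instruction = "Go straight until the end of the street."
    def reset(self) -> list:
        self.pos = 0
        return [float(self.pos)]
    def step(self, action: int) -> Step:
        self.pos = max(0, min(self.n - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.n - 1
        return Step([float(self.pos)], 1.0 if done else 0.0, done)

def random_policy(instruction: str, observation: list) -> int:
    # A trained agent would ground the instruction in the observation;
    # here we act randomly just to show the interaction protocol.
    return random.choice([0, 1])

env = StreetNavEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    step = env.step(random_policy(env.instruction, obs))
    obs, done, total = step.observation, step.done, total + step.reward
print("episode return:", total)
```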

#human

DeepMind teaches AI to follow navigational directions like humans

The brilliant minds at Google’s sister company DeepMind are at it again. This time it appears they’ve developed a system by which driverless cars can navigate the same way humans do: by following directions.

A long time ago, before the millennials were born, people had to drive their cars without any form of GPS navigation. If you wanted to go somewhere new, you used a paper map – they were like offline screenshots of a Google Maps image. Or someone gave you a list of directions. Read More

#human

How Artificial Intelligence Will Kickstart the Internet of Things

The possibilities that IoT brings to the table are endless. IoT continues its run as one of the most popular technology buzzwords of the year, and its next phase is pushing everyone to ask hard questions about the data collected by all of those connected devices and sensors.

IoT will produce a tsunami of big data: as the rapid expansion of devices and sensors connected to the Internet of Things continues, the sheer volume of data they create will rise to an astronomical level. This data will hold extremely valuable insights into what is working well and what is not. Read More

Accelerating Recurrent Neural Network Language Model Based Online Speech Recognition System

This paper presents methods to accelerate recurrent neural network based language models (RNNLMs) for online speech recognition systems. First, a lossy compression of the past hidden layer outputs (history vector) with caching is introduced in order to reduce the number of LM queries. Next, RNNLM computations are deployed in a CPU-GPU hybrid manner, which computes each layer of the model on the more advantageous platform. The overhead added by data exchanges between CPU and GPU is compensated for through a frame-wise batching strategy. The performance of the proposed methods, evaluated on LibriSpeech test sets, indicates that the reduction in history vector precision improves the average recognition speed by 1.23 times with minimal degradation in accuracy, while the CPU-GPU hybrid parallelization enables RNNLM-based real-time recognition with a four-fold improvement in speed. Read More
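A minimal sketch of the history-vector caching idea, assuming a hypothetical rnnlm_score() call for the expensive LM query: the hidden state is lossily quantized and used as a cache key, so repeated queries with near-identical histories reuse a cached score instead of re-running the model.

```python
# Sketch only: quantize() and rnnlm_score() are illustrative, not the paper's code.
import numpy as np

def quantize(history_vector: np.ndarray, bits: int = 4) -> bytes:
    """Lossy-compress the hidden state so that similar states collide."""
    levels = 2 ** bits - 1
    q = np.round((history_vector + 1.0) / 2.0 * levels).astype(np.uint8)
    return q.tobytes()

cache = {}  # (quantized history, word id) -> cached LM score

def cached_lm_score(history_vector: np.ndarray, word_id: int, rnnlm_score) -> float:
    key = (quantize(history_vector), word_id)
    if key not in cache:
        cache[key] = rnnlm_score(history_vector, word_id)  # expensive model call
    return cache[key]

# Usage: identical quantized histories hit the cache instead of the model.
fake_score = lambda h, w: float(np.tanh(h).sum()) + w * 0.01
h = np.random.uniform(-1, 1, size=128).astype(np.float32)
print(cached_lm_score(h, 42, fake_score))
print(cached_lm_score(h + 1e-4, 42, fake_score))  # likely a cache hit after quantization
```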

#nlp, #recurrent-neural-networks

Recurrent Neural Network Language Model Training with Noise Contrastive Estimation for Speech Recognition

In recent years, recurrent neural network language models (RNNLMs) have been successfully applied to a range of tasks, including speech recognition. However, an important issue that limits the quantity of data used, and their possible application areas, is the computational cost of training. A significant part of this cost is associated with the softmax function at the output layer, as it requires a normalization term to be explicitly calculated. This impacts both training and testing speed, especially when a large output vocabulary is used. To address this problem, noise contrastive estimation (NCE) is explored in RNNLM training. NCE does not require this normalization during either training or testing, and it is insensitive to the output layer size. On a large-vocabulary conversational telephone speech recognition task, a doubling in training speed on a GPU and a 56-fold speed-up in test-time evaluation on a CPU were obtained. Read More
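A hedged sketch of the NCE objective, under the usual formulation in which each target word is discriminated from k noise samples using unnormalized scores, so only k + 1 words are touched rather than the whole vocabulary. The scoring function and uniform noise distribution below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_dim, k = 10_000, 256, 10
W = rng.normal(scale=0.01, size=(vocab_size, hidden_dim))   # output embeddings
noise_probs = np.full(vocab_size, 1.0 / vocab_size)          # noise distribution q(w)

def unnorm_logit(hidden: np.ndarray, word: int) -> float:
    """Unnormalized score s(w, h); NCE treats it as an unnormalized log-probability."""
    return float(W[word] @ hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(hidden: np.ndarray, target: int) -> float:
    noise_words = rng.choice(vocab_size, size=k, p=noise_probs)
    # P(data | w, h) = sigmoid(s(w, h) - log(k * q(w))); no softmax over the vocabulary.
    pos = sigmoid(unnorm_logit(hidden, target) - np.log(k * noise_probs[target]))
    loss = -np.log(pos + 1e-12)
    for w in noise_words:
        neg = sigmoid(unnorm_logit(hidden, int(w)) - np.log(k * noise_probs[w]))
        loss -= np.log(1.0 - neg + 1e-12)
    return loss

print(nce_loss(rng.normal(size=hidden_dim), target=123))
```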

#nlp, #recurrent-neural-networks

Artificial Intelligence Can Now Write Amazing Content — What Does That Mean For Humans?

If you believe anything can and will be automated with artificial intelligence (AI), then you might not be surprised to learn how many notable media organizations, including The New York Times, Associated Press, Reuters, Washington Post, and Yahoo! Sports, already use AI to generate content. The Press Association, for example, can now produce 30,000 local news stories a month using AI. You might think these are formulaic who, what, where, and when stories, and you are right: some of them certainly are. But today, AI-written content has expanded beyond formulaic writing to more creative endeavors such as poetry and novels. Read More

#artificial-intelligence, #nlp

Incremental learning algorithms and applications

Incremental learning refers to learning from streaming data that arrive over time, with limited memory resources and, ideally, without sacrificing model accuracy. This setting fits application scenarios such as learning in changing environments, model personalisation, or lifelong learning, and it offers an elegant scheme for big data processing by means of its sequential treatment. In this contribution, we formalise the concept of incremental learning, discuss particular challenges which arise in this setting, and give an overview of popular approaches, their theoretical foundations, and applications which have emerged in recent years. Read More
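A minimal sketch of the streaming setting in scikit-learn terms, assuming a synthetic data stream: partial_fit updates the model one mini-batch at a time, so earlier batches never need to be kept in memory.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])
model = SGDClassifier()  # supports incremental updates via partial_fit

def data_stream(num_batches: int = 50, batch_size: int = 32):
    """Yield (X, y) mini-batches; a real stream would arrive over time."""
    for _ in range(num_batches):
        X = rng.normal(size=(batch_size, 10))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        yield X, y

for X_batch, y_batch in data_stream():
    # The label set is passed so the model knows all classes from the first call.
    model.partial_fit(X_batch, y_batch, classes=classes)

X_test = rng.normal(size=(200, 10))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("held-out accuracy:", model.score(X_test, y_test))
```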

#machine-learning, #privacy, #transfer-learning

Can we stop AI outsmarting humanity?

It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life.

It began four million years ago, when brain volumes began climbing rapidly in the hominid line.

Fifty thousand years ago with the rise of Homo sapiens sapiens.

Ten thousand years ago with the invention of civilization.

Five hundred years ago with the invention of the printing press.

Fifty years ago with the invention of the computer.

In less than thirty years, it will end. Read More

#artificial-intelligence, #singularity