Will China Overtake the U.S. in Artificial Intelligence Research?

China not only has the world’s largest population and looks set to become the largest economy — it also wants to lead the world when it comes to artificial intelligence (AI).

In 2017, the Communist Party of China set 2030 as the deadline for this ambitious AI goal, and, to get there, it laid out a bevy of milestones to reach by 2020. These include making significant contributions to fundamental research, being a favoured destination for the world’s brightest talents and having an AI industry that rivals global leaders in the field.

As this first deadline approaches, researchers note impressive leaps in the quality of China's AI research. They also predict a shift in the nation's ability to retain homegrown talent. That is partly because the government has implemented some successful retention programmes, and partly because worsening diplomatic and trade relations mean that the United States, its main rival in most things including AI, has become a less attractive destination. Read More

#china-vs-us

Model Cards for Model Reporting

Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation. Read More

#devops, #explainability, #governance

Datasheets for Datasets

The machine learning community currently has no standardized process for documenting datasets. To address this gap, we propose datasheets for datasets. In the electronics industry, every component, no matter how simple or complex, is accompanied with a datasheet that describes its operating characteristics, test results, recommended uses, and other information. By analogy, we propose that every dataset be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on. Datasheets for datasets will facilitate better communication between dataset creators and dataset consumers, and encourage the machine learning community to prioritize transparency and accountability. Read More
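
The analogy to electronics datasheets suggests a simple mechanical check at release time: does every dataset ship with the proposed sections filled in? The section names below follow the abstract; the validation convention itself is a hypothetical sketch, not something the paper specifies:

```python
# Illustrative sketch: check that a dataset release answers the datasheet
# sections proposed in the paper. The section names follow the abstract;
# the checking convention is a made-up example.
REQUIRED_SECTIONS = ["motivation", "composition", "collection_process", "recommended_uses"]

def missing_sections(datasheet: dict) -> list:
    """Return the proposed sections absent from a candidate datasheet."""
    return [s for s in REQUIRED_SECTIONS if not datasheet.get(s)]

sheet = {
    "motivation": "Benchmark face detection across lighting conditions.",
    "composition": "10,000 images; labels: face bounding boxes.",
    "collection_process": "Collected with consent from volunteer contributors.",
}
print(missing_sections(sheet))  # the 'recommended_uses' section is still missing
```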

#devops, #explainability, #governance

Microsoft Icecaps: An open-source toolkit for conversation modeling

Icecaps provides an array of capabilities from recent conversation modeling literature. Several of these tools were driven by recent work done here at Microsoft Research, including personalization embeddings, maximum mutual information–based decoding, knowledge grounding, and an approach for enforcing more structure on shared feature representations to encourage more diverse and relevant responses. Our library leverages TensorFlow in a modular framework designed to make it easy for users to construct sophisticated training configurations using multi-task learning. In the coming months, we’ll equip Icecaps with pre-trained conversational models that researchers and developers can either use directly out of the box or quickly adapt to new scenarios by bootstrapping their own systems. Read More
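
One of the techniques the post names, maximum mutual information (MMI)-based decoding, can be illustrated with a toy reranker. This is not the Icecaps API: the candidate responses and probabilities below are made up, and a real system would obtain them from trained conversation models.

```python
import math

def mmi_rerank(hypotheses, lam=0.5):
    """Rank hypotheses by log P(T|S) - lam * log P(T).

    Penalizing the unconditional likelihood P(T) demotes bland,
    high-frequency replies such as "I don't know.", which is why MMI
    decoding tends to produce more diverse, relevant responses.
    """
    def score(h):
        return math.log(h["p_t_given_s"]) - lam * math.log(h["p_t"])
    return sorted(hypotheses, key=score, reverse=True)

candidates = [
    {"text": "I don't know.", "p_t_given_s": 0.30, "p_t": 0.20},            # bland but likely
    {"text": "Try the jazz bar on 5th.", "p_t_given_s": 0.25, "p_t": 0.01}, # specific
]
print(mmi_rerank(candidates)[0]["text"])  # → Try the jazz bar on 5th.
```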

#big7, #robotics

Software Ate The World, Now AI Is Eating Software

Marc Andreessen famously said that “Software is eating the world”, and everyone applauded. This was as much writing on the wall for many traditional enterprises as it was wonderful news for the software industry.

…Little did Andreessen envision that the same software industry could be at risk of being eaten.

Fast forward to 2019, and the very same software industry is nervous. Very, very nervous!

And the reason is AI. Read More

#artificial-intelligence

The AI Edge Engineer: Extending the power of CI/CD to Edge devices using containers

The Artificial Intelligence – Cloud and Edge implementations course explores the idea of extending CI/CD to Edge devices using containers. This post presents these ideas under the framework of the ‘AI Edge Engineer’. Note that the views presented are personal; comments from those exploring similar ideas, especially in academia and research, are welcome.

The post discusses models of development for AI Edge Engineering based on deploying containers to Edge devices, which unifies the Cloud and the Edge. Read More
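
The core mechanic can be sketched as a CI/CD step that, after building and pushing a container image, emits a deployment manifest telling an edge device which image to pull. The schema below is a simplified, hypothetical stand-in (loosely modelled on the manifest-driven style of real edge platforms), and the registry URL and module name are invented:

```python
import json

# Hedged sketch of a CI/CD output for an edge device: a manifest mapping
# a module name to an immutably tagged container image. The schema is a
# simplified, made-up convention, not any specific platform's format.
def edge_manifest(module: str, image: str, tag: str) -> str:
    manifest = {
        "modules": {
            module: {
                "type": "docker",
                "image": f"{image}:{tag}",  # immutable tag produced by the CI build
                "restartPolicy": "always",
                "createOptions": {"HostConfig": {"Privileged": False}},
            }
        }
    }
    return json.dumps(manifest, indent=2)

print(edge_manifest("vision-inference", "registry.example.com/ai/vision", "1.4.2"))
```

Using an immutable tag per build is what lets the same CI/CD pipeline that serves the Cloud also drive fleets of Edge devices: each device reconciles against the manifest rather than being updated by hand.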

#devops, #iot

Jack Ma and Elon Musk’s AI debate in Shanghai

Read More

#china-vs-us, #videos

Emotionless: Privacy-Preserving Speech Analysis for Voice Assistants

Voice-enabled interactions provide more human-like experiences in many popular IoT systems. Cloud-based speech analysis services extract useful information from voice input using speech recognition techniques. The voice signal is a rich resource that discloses several possible states of a speaker, such as emotional state, confidence and stress levels, physical condition, age, gender, and personal traits. Service providers can therefore build a very accurate profile of a user’s demographic category and personal preferences, which may compromise privacy. To address this problem, a privacy-preserving intermediate layer between users and cloud services is proposed to sanitize the voice input, aiming to maintain utility while preserving user privacy. It achieves this by collecting real-time speech data and analyzing the signal to ensure privacy protection before the data is shared with service providers. Specifically, sensitive representations are extracted from the raw signal using transformation functions and then masked via voice conversion technology. An experimental evaluation based on emotion recognition shows that identification of the speaker’s sensitive emotional state is reduced by ~96%. Read More
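
A crude, pure-Python toy can convey the shape of such an intermediate layer: transform the signal on-device before it is shared. Here the transformation is simply flattening per-frame energy (one coarse prosodic cue for emotion) while preserving total energy; the actual system uses learned transformation functions and voice conversion, so this is an illustration of the pipeline position, not of the paper's method:

```python
# Toy stand-in for an on-device sanitization step: flatten per-frame
# energy (a crude emotion cue) before the audio leaves the device.
def flatten_energy(frames):
    """Rescale each frame so all frames have the average frame energy."""
    energies = [sum(s * s for s in f) for f in frames]
    target = sum(energies) / len(energies)
    out = []
    for f, e in zip(frames, energies):
        gain = (target / e) ** 0.5 if e > 0 else 0.0
        out.append([s * gain for s in f])
    return out

frames = [[0.1, -0.1, 0.2], [0.8, -0.9, 1.0]]  # a quiet frame vs. a loud one
flat = flatten_energy(frames)
print([round(sum(s * s for s in f), 6) for f in flat])  # energies now equal
```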

#nlp, #privacy, #voice

Philosophy will be the key that unlocks artificial intelligence

To state that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos would be uncontroversial. The brain is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances. Read More

#artificial-intelligence

Quantum radar: Experimental Microwave Quantum Illumination

Quantum illumination is a powerful sensing technique that employs entangled photons to boost the detection of low-reflectivity objects in environments with bright thermal noise. The promised advantage over classical strategies is particularly evident at low signal photon flux, a feature that makes the protocol an ideal prototype for non-invasive biomedical scanning or low-power short-range radar detection. In this work we experimentally demonstrate quantum illumination at microwave frequencies. We generate entangled fields using a Josephson parametric converter at millikelvin temperatures to illuminate a room-temperature object at a distance of 1 meter in a proof-of-principle bistatic radar setup. Using heterodyne detection and suitable data processing at the receiver, we observe up to a threefold improvement in signal-to-noise ratio compared with the classical benchmark, the coherent-state transmitter, outperforming any classically correlated radar source at the same signal power and bandwidth. Quantum illumination is a first room-temperature application of microwave quantum circuits demonstrating quantum supremacy in detection and sensing. Read More

#quantum