Andrew Ng: Deep Learning, Education, and Real-World AI

Read More

#deep-learning, #videos

Generalization Through Memorization: Nearest Neighbor Language Models

We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this augmentation to a strong WIKITEXT-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state-of-the-art perplexity of 15.79 – a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail. Read More
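The interpolation the abstract describes can be sketched in a few lines: the kNN distribution puts mass on the tokens that followed the retrieved neighbor contexts (weighted by a softmax over negative distances), and the final distribution is a linear mix of that with the base LM. This is a minimal illustration, not the paper's implementation; the function name, the distance weighting details, and the λ value are assumptions for the example.

```python
import numpy as np

def knn_lm_probs(lm_probs, neighbor_dists, neighbor_targets, vocab_size, lam=0.25):
    """Interpolate a base LM distribution with a kNN distribution.

    lm_probs:         base LM next-token distribution, shape (vocab_size,)
    neighbor_dists:   distances from the query context to its k retrieved
                      neighbors in the LM embedding space, shape (k,)
    neighbor_targets: token id that followed each neighbor context, shape (k,)
    lam:              interpolation weight on the kNN distribution (tuned on
                      held-out data in practice)
    """
    # Softmax over negative distances: closer neighbors get more mass.
    weights = np.exp(-neighbor_dists)
    weights /= weights.sum()

    # Aggregate neighbor weights onto the tokens they predict
    # (a token appearing after several neighbors accumulates their mass).
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, neighbor_targets, weights)

    # Linear interpolation of the two distributions.
    return lam * knn_probs + (1 - lam) * lm_probs
```

Because the datastore only enters through `neighbor_dists` and `neighbor_targets`, swapping in a different text collection changes the kNN component without retraining the LM, which is the domain-adaptation point the abstract makes.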

#neural-networks

How Big Data Is Empowering AI and Machine Learning?

Read More

#artificial-intelligence

AI That Generates Inspirational Pictures

Read More

#image-recognition, #nlp, #videos

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

The leadership of the Department of Defense (DoD) tasked the Defense Innovation Board (DIB) with proposing Artificial Intelligence (AI) Ethics Principles for DoD for the design, development, and deployment of AI for both combat and non-combat purposes. Building upon the foundation of DoD’s existing ethical, legal, and policy frameworks and responsive
to the complexities of the rapidly evolving field of AI, the Board sought to develop principles consistent with the Department’s mission to deter war and ensure the country’s security. This document summarizes the DIB’s project and includes a brief background; an outline of enduring DoD ethics principles that transcend AI; a set of proposed AI Ethics Principles; and a set of recommendations to facilitate the Department’s adoption of these principles and advance the wider aim of promoting AI safety, security, and robustness. The DIB’s complete report includes detailed explanations and addresses the wider historical, policy, and theoretical context for these recommendations. It is available at innovation.defense.gov/ai.

The DIB is an independent federal advisory committee that provides advice and recommendations to DoD senior leaders; it does not speak for DoD. This report is an earnest attempt to open a thought-provoking dialogue, both internally within the Department and externally in wider society. The Department has the sole responsibility to determine how best to proceed with the recommendations made in this report. Read More

#dod, #ethics

EU White Paper On Artificial Intelligence – A European approach to excellence and trust

Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.

Against a background of fierce global competition, a solid European approach is needed, building on the European strategy for AI presented in April 2018. To address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values, to promote the development and deployment of AI. Read More

#ethics

Linking sense of touch to facial movement inches robots toward ‘feeling’ pain

A robot with a sense of touch may one day “feel” pain, both its own physical pain and empathy for the pain of its human companions. Such touchy-feely robots are still far off, but advances in robotic touch-sensing are bringing that possibility closer to reality.

Sensors embedded in soft, artificial skin that can detect both a gentle touch and a painful thump have been hooked up to a robot that can then signal emotions, Minoru Asada reported February 15 at the annual meeting of the American Association for the Advancement of Science. This artificial “pain nervous system,” as Asada calls it, may be a small building block for a machine that could ultimately experience pain (in a robotic sort of way). Such a feeling might also allow a robot to “empathize” with a human companion’s suffering. Read More

#robotics

How neuro-symbolic AI might finally make machines reason like humans

If you want a machine to do something intelligent, you either have to program it or teach it to learn.

For decades, engineers have been programming machines to perform all sorts of tasks — from software that runs on your personal computer and smartphone to guidance control for space missions.

But although computers are generally much faster and more precise than the human brain at sequential tasks, such as adding numbers or calculating chess moves, such programs are very limited in their scope. Something as trivial as identifying a bicycle among a crowded pedestrian street or picking up a hot cup of coffee from a desk and gently moving it to the mouth can send a computer into convulsions, never mind conceptualizing or abstraction (such as designing a computer itself).

The gist is that humans were never programmed (not like a digital computer, at least) — humans have become intelligent through learning. Read More

#human, #observational-learning

Amazon Uses Self-Learning to Teach Alexa to Correct its Own Mistakes

Digital assistants such as Alexa, Siri, Cortana, or the Google Assistant are some of the best examples of mainstream adoption of artificial intelligence (AI) technologies. These assistants are becoming more prevalent and tackling new domain-specific tasks, which makes the maintenance of their underlying AI particularly challenging. The traditional approach to building digital assistants has been based on natural language understanding (NLU) and automatic speech recognition (ASR) methods that rely on annotated datasets. Recently, the Amazon Alexa team published a paper proposing a self-learning method that allows Alexa to correct its mistakes while interacting with users. Read More

#nlp, #observational-learning

Why Bill Gates thinks gene editing and artificial intelligence could save the world

Microsoft co-founder Bill Gates has been working to improve the state of global health through his nonprofit foundation for 20 years, and today he told the nation’s premier scientific gathering that advances in artificial intelligence and gene editing could accelerate those improvements exponentially in the years ahead.

“We have an opportunity with the advance of tools like artificial intelligence and gene-based editing technologies to build this new generation of health solutions so that they are available to everyone on the planet. And I’m very excited about this,” Gates said in Seattle during a keynote address at the annual meeting of the American Association for the Advancement of Science. Read More

#deep-learning