Google’s quiet experiments may lead to smart tattoos, holographic glasses

A simple pair of sunglasses that projects holographic icons. A smartwatch that has a digital screen but analog hands. A temporary tattoo that, when applied to your skin, transforms your body into a living touchpad. A virtual reality controller that lets you pick up objects in digital worlds and feel their weight as you swing them around. Those are some of the projects Google has quietly been developing or funding, according to white papers and demo videos, in an effort to create the next generation of wearable technology devices. Read More

#image-recognition, #iot

Is Artificial General Intelligence (AGI) On The Horizon? Interview With Dr. Ben Goertzel, CEO & Founder, SingularityNET Foundation

The ultimate vision of artificial intelligence is systems that can handle the wide range of cognitive tasks that humans can. This idea of a single, general intelligence is referred to as Artificial General Intelligence (AGI): a system that can act and think much like a human. However, we have not yet achieved this kind of generally intelligent system, and as such, current applications are limited to narrow AI, such as recognition systems, hyperpersonalization and recommendation tools, and even autonomous vehicles. This raises the question: Is AGI really around the corner, or are we chasing an elusive goal that we may never realize? Read More

#human

What is adversarial machine learning?

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

To human observers, the following two images are identical. But researchers at Google showed in 2015 that a popular image classification algorithm labeled the left image as “panda” and the right one as “gibbon.” And oddly enough, it had more confidence in the gibbon label. Read More
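The panda/gibbon mismatch comes from an adversarial perturbation: a tiny, targeted change to the input, computed from the model's gradient. A minimal sketch of that idea (the fast gradient sign method) on a toy linear classifier — the weights, features, and epsilon below are hypothetical illustrations, not the network from the Google paper:

```python
import numpy as np

# Toy linear classifier over 4 features and 2 classes:
# column 0 = "panda", column 1 = "gibbon".
W = np.array([[ 1.0, -1.0],
              [ 0.5, -0.5],
              [-0.3,  0.3],
              [ 0.8, -0.8]])

def logits(x):
    return x @ W

x = np.array([0.9, 0.7, 0.2, 0.6])   # clean input, classified as class 0

# For a linear model, the gradient of the class-0 score w.r.t. x is W[:, 0].
# FGSM nudges every feature by epsilon in the direction that *lowers* the
# true-class score: x' = x - epsilon * sign(grad).
epsilon = 0.7
x_adv = x - epsilon * np.sign(W[:, 0])

print(logits(x).argmax())      # 0 ("panda")
print(logits(x_adv).argmax())  # 1 ("gibbon") — the label flips
```

Each individual feature moves by at most epsilon, which is why adversarial images can look unchanged to a human while flipping the model's prediction.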

#adversarial

Headline Generation: Learning from Decomposable Document Titles

We propose a novel method for generating titles for unstructured text documents. We reframe the problem as a sequential question-answering task. A deep neural network is trained on document-title pairs with decomposable titles, meaning that the vocabulary of the title is a subset of the vocabulary of the document. To train the model we use a corpus of millions of publicly available document-title pairs: news articles and headlines. We present the results of a randomized double-blind trial in which subjects were unaware of which titles were human or machine-generated. When trained on approximately 1.5 million news articles, the model generates headlines that humans judge to be as good or better than the original human-written headlines in the majority of cases. Read More
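The "decomposable" condition in the abstract — the title's vocabulary is a subset of the document's — can be stated as a simple set check. A minimal sketch, assuming a naive lowercase whitespace tokenizer (the paper's actual tokenization is not specified here):

```python
def is_decomposable(title: str, document: str) -> bool:
    """True if every token in the title also appears in the document,
    i.e. the title vocabulary is a subset of the document vocabulary."""
    title_vocab = set(title.lower().split())
    doc_vocab = set(document.lower().split())
    return title_vocab <= doc_vocab

# Example: the first title reuses only words from the document,
# the second introduces words the document never contains.
doc = "tech stocks fall sharply on friday amid new tariffs"
print(is_decomposable("Stocks fall on tariffs", doc))  # True
print(is_decomposable("Markets tumble", doc))          # False
```

Filtering a news corpus with a check like this yields the document-title pairs on which such a model can be trained as an extractive, question-answering-style task.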

#nlp

Image Search with Text Feedback by Visiolinguistic Attention Learning

Image search with text feedback has promising impact in various real-world applications, such as e-commerce and internet search. Given a reference image and text feedback from the user, the goal is to retrieve images that not only resemble the input image, but also change certain aspects in accordance with the given text. This is a challenging task, as it requires a synergistic understanding of both image and text. In this work, we tackle this task with a novel Visiolinguistic Attention Learning (VAL) framework. Specifically, we propose a composite transformer that can be seamlessly plugged into a CNN to selectively preserve and transform the visual features conditioned on language semantics. By inserting multiple composite transformers at varying depths, VAL is incentivized to encapsulate multi-granular visiolinguistic information, thus yielding an expressive representation for effective image search. We conduct a comprehensive evaluation on three datasets: Fashion200k, Shoes, and FashionIQ. Extensive experiments show our model exceeds existing approaches on all datasets, demonstrating consistent superiority in coping with various forms of text feedback, including attribute-like and natural-language descriptions. Read More
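The core idea of "selectively preserve and transform visual features conditioned on language semantics" can be sketched as one attention step that weights spatial visual features by their relevance to the text, then adds a text-conditioned residual. The shapes, single attention head, and residual composition below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

np.random.seed(1)
d = 8
visual = np.random.randn(16, d)   # 16 spatial locations from a CNN layer
text = np.random.randn(d)         # pooled embedding of the text feedback

# Attention weights: how strongly each spatial location relates to the text.
scores = visual @ text / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Residual-style composition: the original visual feature is preserved,
# and a text-conditioned component is added where attention is high.
fused = visual + weights[:, None] * text[None, :]

print(fused.shape)  # (16, 8) — same shape, so it plugs back into the CNN
```

Keeping the output shape equal to the input shape is what lets a module like this be inserted at several depths of the network, which is how multi-granular information gets captured.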

#big7, #image-recognition, #nlp

A New Map Shows the Inescapable Creep of Surveillance

The Atlas of Surveillance shows which tech law enforcement agencies across the country have acquired. It’s a sobering look at the present-day panopticon.

Over 1,300 partnerships with Ring. Hundreds of facial recognition systems. Dozens of cell-site simulator devices. The surveillance apparatus in the United States takes all kinds of forms in all kinds of places—a huge number of which populate a new map called the Atlas of Surveillance.

A collaboration between the Electronic Frontier Foundation and the University of Nevada, Reno Reynolds School of Journalism, the Atlas of Surveillance offers an omnibus look not only at what technologies law enforcement agencies deploy, but where they do it. Read More

#surveillance

Facebook built a new fiber-spinning robot to make internet service cheaper

The robot’s code name is Bombyx, which is Latin for silkworm, and pilot tests with the machine begin next year.

The robot rests delicately atop a power line, balanced high above the ground, almost as if it’s floating. Like a short, stocky tightrope walker, it gradually makes its way forward, leaving a string of cable in its wake. When it comes to a pole, it gracefully elevates its body to pass the roadblock and keep chugging along. Read More

#big7, #investing, #robotics

Kai-Fu Lee: AI Superpowers – China and Silicon Valley

Read More

#china-vs-us, #podcasts, #videos

Microsoft spins out 5-year-old Chinese chatbot Xiaoice

Microsoft is spinning off its empathetic chatbot Xiaoice into an independent entity, the U.S. software behemoth said (in Chinese) Monday, confirming an earlier report in June by the Chinese news site Chuhaipost. Read More

#chatbots, #videos

There’s plenty of room at the Top: What will drive computer performance after Moore’s law?

The doubling of the number of transistors on a chip every 2 years, a seemingly inevitable trend that has been called Moore’s law, has contributed immensely to improvements in computer performance. However, silicon-based transistors cannot get much smaller than they are today, and other approaches should be explored to keep performance growing. Leiserson et al. review recent examples and argue that the most promising place to look is at the top of the computing stack, where improvements in software, algorithms, and hardware architecture can bring the much-needed boost. Read More
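A classic illustration of performance "at the Top" is matrix multiplication: the identical computation can run orders of magnitude faster when moved from interpreted loops to an optimized, hardware-aware library. A small sketch contrasting a naive Python triple loop with NumPy's BLAS-backed matmul (tiny matrices here, so only correctness is checked; the speed gap grows dramatically with size):

```python
import numpy as np

def matmul_naive(A, B):
    """Textbook triple-loop matrix multiply over Python lists."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            a = A[i][k]
            for j in range(p):
                C[i][j] += a * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]

print(matmul_naive(A, B))           # [[19.0, 22.0], [43.0, 50.0]]
print(np.array(A) @ np.array(B))    # same values via an optimized BLAS kernel
```

Same algorithm, same answer: the speedup comes entirely from software engineering (cache-aware blocking, vectorization, parallelism) in the optimized library — exactly the kind of headroom the authors argue remains after transistor scaling ends.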

#performance, #nvidia