The Neuro-Symbolic Concept Learner: interpreting scenes, words, and sentences from natural supervision

We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns simply by looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of the two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogous to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the search over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval. Read More
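To make the idea of executing symbolic programs on an object-based scene representation concrete, here is a minimal illustrative sketch (not the authors' code): a toy executor running a parsed program such as "filter(red) → query(shape)" over a list of detected objects. The concept names, object attributes, and the two operations are hypothetical stand-ins; in NS-CL, concept membership is scored softly by learned neural embeddings rather than matched against hard labels.

```python
# Illustrative sketch only: a symbolic program executed on an object-based
# scene representation. Attributes are hard labels here; NS-CL instead scores
# concepts with learned embeddings and executes programs quasi-symbolically.
from typing import Dict, List

Object = Dict[str, str]  # e.g. {"color": "red", "shape": "cube"}

def op_filter(objects: List[Object], attribute: str, value: str) -> List[Object]:
    """Keep only the objects whose attribute matches the queried concept."""
    return [obj for obj in objects if obj.get(attribute) == value]

def op_query(objects: List[Object], attribute: str) -> str:
    """Return the queried attribute of the single remaining object."""
    assert len(objects) == 1, "query expects exactly one object"
    return objects[0][attribute]

# A toy scene the perception module might produce, and a parsed program for
# the question "What is the shape of the red object?"
scene = [{"color": "red", "shape": "cube"},
         {"color": "blue", "shape": "sphere"}]
program = [("filter", "color", "red"), ("query", "shape")]

state = scene
answer = None
for step in program:
    if step[0] == "filter":
        state = op_filter(state, step[1], step[2])
    elif step[0] == "query":
        answer = op_query(state, step[1])

print(answer)  # -> "cube"
```

Because the program is executed step by step on the scene representation, the answer supervision alone can tell the model which visual concepts and which sentence parses are consistent with the image, which is what lets perception and parsing be learned jointly without direct labels.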

#human

Intelligent Machines Two rival AI approaches combine to let machines learn about the world like a child

Over the decades since the inception of artificial intelligence, research in the field has fallen into two main camps. The “symbolists” have sought to build intelligent machines by coding in logical rules and representations of the world. The “connectionists” have sought to construct artificial neural networks, inspired by biology, to learn about the world. The two groups have historically not gotten along.

But a new paper from MIT, IBM, and DeepMind shows the power of combining the two approaches, perhaps pointing a way forward for the field. The team, led by Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds, and Machines, created a computer program called a neuro-symbolic concept learner (NS-CL) that learns about the world (albeit a simplified version) just as a child might—by looking around and talking. Read More

#human

Artificial intelligence: The EU’s 7 steps for trustworthy AI

Do you trust AI? If not, what would it take? The European Commission says there are seven steps to building trust in artificial intelligence. It has published the latest findings from its high-level expert group on AI. Read More

#ethics