The Computer Scientist Training AI to Think With Analogies

Melanie Mitchell has worked on digital minds for decades. She says they’ll never truly be like ours until they can make analogies.

The Pulitzer Prize-winning book Gödel, Escher, Bach inspired legions of computer scientists in 1979, but few were as inspired as Melanie Mitchell. After reading the 777-page tome, Mitchell, a high school math teacher in New York, decided she “needed to be” in artificial intelligence. She soon tracked down the book’s author, AI researcher Douglas Hofstadter, and talked him into giving her an internship. She had only taken a handful of computer science courses at the time, but he seemed impressed with her chutzpah and unconcerned about her academic credentials.

Mitchell prepared a “last-minute” graduate school application and joined Hofstadter’s new lab at the University of Michigan in Ann Arbor. The two spent the next six years collaborating closely on Copycat, a computer program that, in the words of its co-creators, was designed to “discover insightful analogies, and to do so in a psychologically realistic way.” Read More

#human

New toolkit aims to help teams create responsible human-AI experiences

Microsoft has released the Human-AI eXperience (HAX) Toolkit, a set of practical tools to help teams strategically adopt and responsibly implement best practices when creating artificial intelligence technologies that interact with people.

The toolkit comes as AI-infused products and services, such as virtual assistants, route planners, autocomplete, recommendations and reminders, are becoming increasingly popular and useful for many people. But these applications have the potential to do things that aren’t helpful, like misunderstand a voice command or misinterpret an image. In some cases, AI systems can demonstrate disruptive behaviors or even cause harm. Read More

#big7, #devops, #human

Facebook is ditching plans to make an interface that reads the brain

The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking—tapping them out at 100 words per minute.

The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in—and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading still remains very far off. Read More

#big7, #human

AI ethicist Kate Darling: ‘Robots can be our partners’

The MIT researcher says that for humans to flourish we must move beyond thinking of robots as potential future competitors.

Dr Kate Darling is a research specialist in human-robot interaction, robot ethics and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab. In her new book, The New Breed, she argues that we would be better prepared for the future if we started thinking about robots and artificial intelligence (AI) like animals. Read More

#human, #robotics

Why scientists think this hack is crucial for lifelong learning

In season two of the show 30 Rock, Tina Fey’s character Liz Lemon says to her boss, “I have to do that thing rich people do, where they turn money into more money.” While our brains can’t passively invest in stocks for us and watch the money grow, they can do almost exactly that when working on a new skill: turn learning into more learning.

All you have to do is sit back and relax.

This is exemplified by a study published June 8 in Cell Reports. Scientists examined the fluctuating magnetic fields in the brains of participants asked to perform a sequential task repeatedly. They observed that during the brief breaks between practice rounds, the participants’ brains rapidly replayed the task, as if the learning were continuing on its own. Read More

#human

Is ‘brain drift’ the key to machine consciousness?

Could this currently inexplicable phenomenon be what’s keeping our robots from experiencing reality?

Think about someone you love and the neurons in your brain will light up like a Christmas tree. But if you think about that person again, will the same neurons light up? Chances are the answer is no. And that could have big implications for the future of AI.

A team of neuroscientists from Columbia University in New York recently published research demonstrating what they refer to as “representational drift” in the brains of mice. Read More

#human

Brain Science 183: Jeff Hawkins shares his new theory of intelligence

Read More
#human, #podcasts

Can you teach a machine to think?

The road to building an artificial general intelligence begins with stopping current AI models from perpetuating racism, sexism, and other forms of pernicious bias.

“Building a machine that can think and do things that people can do has been the goal of AI since the very beginning, but it’s been a long, long struggle. And past hype has led to failure. So this idea of artificial general intelligence has become, you know, very controversial and very divisive — but it’s having a comeback.” Read More

#human, #podcasts

Brain implants let paralyzed man write on a screen using thoughts alone

Researchers combine neural implants with AI to develop a “mindwriting” system that converts imagined writing to text on a screen.

The system uses two implanted electrode arrays that record the brain activity produced by thinking about writing letters. This information is then collected and processed in real time by a computer, which converts that data into words on a screen. Read More
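The pipeline described above — neural recordings from imagined handwriting, decoded in real time into text — can be illustrated with a toy sketch. This is not the researchers’ actual method (the published system trains a recurrent neural network on electrode-array data); everything here, from the feature templates to the nearest-centroid matching, is an invented stand-in to show the overall shape of signal-to-text decoding.

```python
import math

# Hypothetical per-letter "templates" (centroids of neural feature vectors).
# The values are invented for illustration; a real decoder learns these
# representations from recorded brain activity.
TEMPLATES = {
    "h": [0.9, 0.1, 0.2],
    "i": [0.1, 0.8, 0.3],
}

def decode_sample(features):
    """Return the letter whose template lies closest to the feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda letter: dist(TEMPLATES[letter], features))

def decode_stream(samples):
    """Convert a stream of per-letter feature vectors into on-screen text."""
    return "".join(decode_sample(s) for s in samples)

# Two noisy "recordings," one per imagined letter.
print(decode_stream([[0.85, 0.15, 0.25], [0.2, 0.75, 0.35]]))  # -> hi
```

In the real system the classification step runs continuously as the participant imagines writing, which is what makes the reported speeds (roughly 90 characters per minute) possible.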

#human

Optoelectronic intelligence

General intelligence involves the integration of many sources of information into a coherent, adaptive model of the world. To design and construct hardware for general intelligence, we must consider principles of both neuroscience and very-large-scale integration. For large neural systems capable of general intelligence, the attributes of photonics for communication and electronics for computation are complementary and interdependent. Using light for communication enables high fan-out as well as low-latency signaling across large systems with no traffic-dependent bottlenecks. For computation, the inherent nonlinearities, high speed, and low power consumption of Josephson circuits are conducive to complex neural functions. Operation at 4 K enables the use of single-photon detectors and silicon light sources, two features that lead to efficiency and economical scalability. Here, I sketch a concept for optoelectronic hardware, beginning with synaptic circuits, continuing through wafer-scale integration, and extending to systems interconnected with fiber-optic tracts, potentially at the scale of the human brain and beyond. Read More

#human