Brain implants let paralyzed man write on a screen using thoughts alone

Researchers combine neural implants with AI to develop a “mindwriting” system that converts imagined writing to text on a screen.

The system uses two implanted electrode arrays that record the brain activity produced by thinking about writing letters. A computer then processes that activity in real time and converts it into words on a screen. Read More
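The pipeline the article describes — record neural activity, process it, map it to characters — can be caricatured in a few lines. Everything below is synthetic and hypothetical (the real system trained a recurrent neural network on multielectrode recordings); it only illustrates the idea of matching a neural feature vector to the nearest learned letter template.

```python
import numpy as np

rng = np.random.default_rng(0)

letters = ["a", "b", "c"]
# Hypothetical per-letter "templates": mean feature vectors across
# 192 electrode channels, as if learned from calibration data.
templates = {ch: rng.normal(loc=i, scale=0.1, size=192)
             for i, ch in enumerate(letters)}

def decode(window):
    """Return the letter whose template is closest to this feature window."""
    return min(letters,
               key=lambda ch: np.linalg.norm(window - templates[ch]))

# Simulate an imagined "b": its template plus recording noise.
observed = templates["b"] + rng.normal(scale=0.05, size=192)
print(decode(observed))  # → b
```

A nearest-centroid rule is far too weak for real neural data, but it makes the structure of the problem — continuous signals in, discrete characters out — concrete.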

#human

Optoelectronic intelligence

General intelligence involves the integration of many sources of information into a coherent, adaptive model of the world. To design and construct hardware for general intelligence, we must consider principles of both neuroscience and very-large-scale integration. For large neural systems capable of general intelligence, the attributes of photonics for communication and electronics for computation are complementary and interdependent. Using light for communication enables high fan-out as well as low-latency signaling across large systems with no traffic-dependent bottlenecks. For computation, the inherent nonlinearities, high speed, and low power consumption of Josephson circuits are conducive to complex neural functions. Operation at 4 K enables the use of single-photon detectors and silicon light sources, two features that lead to efficiency and economical scalability. Here, I sketch a concept for optoelectronic hardware, beginning with synaptic circuits, continuing through wafer-scale integration, and extending to systems interconnected with fiber-optic tracts, potentially at the scale of the human brain and beyond. Read More

#human

Journey to the center of the neuron

Every single one of your thoughts is made possible by your biological neurons. And behind many of the most useful A.I. architectures is an entity inspired by them. Neurons are at the epicenter of the processing that underpins the complexity produced by intelligent systems. Curious to know more about the engine of your thoughts, and about how biological neurons compare to their artificial counterparts? Let’s do it!

A.I. neurons were originally inspired by our biological ones, yet they are very different. And why shouldn’t they be? There are many ways to reach the same destination: just as human flight was inspired by birds without copying them part for part, our artificial neurons are only partially inspired by our biological ones.

And yet, our biological neurons are far more complex than our artificial ones, holding rich detail and many mysteries within. Even if we don’t need to copy the way biological neurons work, understanding the differences between the two could give us new clues about how to move toward a more flexible form of artificial intelligence. Read More
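For contrast, here is essentially the entirety of a standard artificial neuron — a weighted sum of inputs plus a bias, passed through a nonlinearity — which makes the gap in complexity with its biological counterpart concrete (the numbers are arbitrary):

```python
import math

def sigmoid(x):
    # A classic squashing nonlinearity, loosely analogous to a
    # neuron's firing response.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, shifted by a bias, then squashed.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(activation)

out = neuron(inputs=[0.5, -1.0, 2.0], weights=[0.8, 0.2, 0.1], bias=-0.3)
print(out)  # → ~0.525
```

That is the whole computational story for one unit in a typical network — no dendrites, ion channels, or spike-timing dynamics.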

#human

Artificial intelligence looks for a ‘language’ of cancer and Alzheimer’s

Researchers use machine learning to look for a biological language for disease in protein sequences.

Artificial intelligence is being used to try to crack open all kinds of problems. Some experts think that the techniques used to predict what movies or TV shows someone will like, or what word will come next in a sentence, could be applied to biology. A group of researchers is hoping to use algorithms and language processing to find the mistakes in cells that cause diseases such as cancer and neurodegenerative disorders like Alzheimer’s.

A team based at St John’s College, University of Cambridge think that machine learning technology can be used to find a kind of “biological language” for disease in the body. “Bringing machine-learning technology into research into neurodegenerative diseases and cancer is an absolute game-changer,” says Tuomas Knowles, one of the authors of the paper and a Fellow at St John’s College, in a press release. “Ultimately, the aim will be to use artificial intelligence to develop targeted drugs to dramatically ease symptoms or to prevent dementia happening at all.” Read More
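As a toy illustration of treating biological sequences as a "language" — emphatically not the Cambridge team's method, which uses far more sophisticated models on real protein data — one can fit a bigram model over amino-acid letters and ask what residue is likely to come next. The corpus below is invented:

```python
from collections import Counter, defaultdict

# Invented "protein sequences" written in the one-letter amino-acid code.
corpus = ["MKVLAA", "MKVLGA", "MKALAA", "MKVLAG"]

# Count how often each residue follows each other residue (bigrams).
counts = defaultdict(Counter)
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def next_residue(prefix):
    """Most likely amino acid to follow the last residue of `prefix`."""
    last = prefix[-1]
    return counts[last].most_common(1)[0][0]

print(next_residue("MK"))  # → V (K is followed by V in 3 of 4 sequences)
```

The same next-token framing, scaled up to neural language models and millions of real sequences, is what makes the "biological language" analogy more than a metaphor.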

#human

Geoffrey Hinton has a hunch about what’s next for AI

A decade ago, the artificial-intelligence pioneer transformed the field with a major breakthrough. Now he’s working on a new imaginary system named GLOM.

Back in November, the computer scientist and cognitive psychologist Geoffrey Hinton had a hunch. After a half-century’s worth of attempts—some wildly successful—he’d arrived at another promising insight into how the brain works and how to replicate its circuitry in a computer.

“It’s my current best bet about how things fit together,” Hinton says from his home office in Toronto, where he’s been sequestered during the pandemic. If his bet pays off, it might spark the next generation of artificial neural networks—mathematical computing systems, loosely inspired by the brain’s neurons and synapses, that are at the core of today’s artificial intelligence. His “honest motivation,” as he puts it, is curiosity. But the practical motivation—and, ideally, the consequence—is more reliable and more trustworthy AI.

A Google engineering fellow and cofounder of the Vector Institute for Artificial Intelligence, Hinton wrote up his hunch in fits and starts, and at the end of February announced via Twitter that he’d posted a 44-page paper on the arXiv preprint server. He began with a disclaimer: “This paper does not describe a working system,” he wrote. Rather, it presents an “imaginary system.” He named it “GLOM.” The term derives from “agglomerate” and the expression “glom together.” Read More

#human, #image-recognition

The Limits of Political Debate

I.B.M. taught a machine to debate policy questions. What can it teach us about the limits of rhetorical persuasion?

We need A.I. to be more like a machine, supplying troves of usefully organized information. It can leave the bullshitting to us.

In February, 2011, an Israeli computer scientist named Noam Slonim proposed building a machine that would be better than people at something that seems inextricably human: arguing about politics. …In February, 2019, the machine had its first major public debate, hosted by Intelligence Squared, in San Francisco. The opponent was Harish Natarajan, a thirty-one-year-old British economic consultant, who, a few years earlier, had been the runner-up in the World Universities Debating Championship. The machine lost.

As Arthur Applbaum, a political philosopher who is the Adams Professor of Political Leadership and Democratic Values at Harvard’s Kennedy School, saw it, the particular adversarial format chosen for this debate had the effect of elevating technical questions and obscuring ethical ones. The audience had voted Natarajan the winner of the debate. But, Applbaum asked, what had his argument consisted of? “He rolled out standard objections: it’s not going to work in practice, and it will be wasteful, and there will be unintended consequences. If you go through Harish’s argument line by line, there’s almost no there there,” he said. Natarajan’s way of defeating the computer, at some level, had been to take a policy question and strip it of all its meaningful specifics. “It’s not his fault,” Applbaum said. There was no way that he could match the computer’s fact-finding. “So, instead, he bullshat.” Read More

#big7, #human

ContinualAI

Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to novel situations, but we can also use these as the foundation for later learning. One of the grand goals of AI is to build artificial “continual learning” agents that construct a sophisticated understanding of the world from their own experience through the incremental development of increasingly complex knowledge and skills.

ContinualAI is an official non-profit research organization and the largest open community on Continual Learning for AI. Our core mission is to fuel continual learning research by connecting researchers in the field and offering a platform to share, discuss, and produce original research on a topic we consider fundamental for the future of AI. Read More

#human

AI Weekly: Continual learning offers a path toward more humanlike AI

State-of-the-art AI systems are remarkably capable, but they suffer from a key limitation: they are static. Algorithms are trained once on a dataset and rarely again, making them incapable of learning new information without retraining. The human brain, by contrast, learns constantly, building on knowledge gained over time as it encounters new information. While there has been progress toward bridging the gap, solving the problem of “continual learning” remains a grand challenge in AI.
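One common baseline for the continual-learning problem described above is experience replay: keep a small buffer of past examples and mix them into every new training batch, so the model keeps seeing old tasks while learning new ones. The sketch below shows only the buffer logic (reservoir sampling), with a hypothetical `train_step` left as a comment:

```python
import random

random.seed(0)

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling: every example ever seen has an equal
        # chance of being in the buffer, regardless of arrival order.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

buffer = ReplayBuffer(capacity=100)
for task in ["task_a", "task_b"]:
    for i in range(500):
        new_example = (task, i)
        batch = [new_example] + buffer.sample(8)  # mix old with new
        # train_step(model, batch)  # hypothetical model update goes here
        buffer.add(new_example)

# After both tasks, the buffer still holds examples from task_a,
# so a model trained on these batches never stops seeing the old task.
print(sum(1 for ex in buffer.data if ex[0] == "task_a"))
```

Replay is only one family of techniques — regularization and architectural approaches exist too — but it is a simple way to see what "not forgetting" requires mechanically.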

This challenge motivated a team of AI and neuroscience researchers to found ContinualAI, a nonprofit organization and open community of continual and lifelong learning enthusiasts. ContinualAI recently announced Avalanche, a library of tools compiled over the course of a year by over 40 contributors to make continual learning research easier and more reproducible. The group also hosts conference-style presentations, sponsors workshops and AI competitions, and maintains a repository of tutorials, code, and guides. Read More

#human

The Autodidactic Universe

We present an approach to cosmology in which the Universe learns its own physical laws. It does so by exploring a landscape of possible laws, which we express as a certain class of matrix models. We discover maps that put each of these matrix models in correspondence with both a gauge/gravity theory and a mathematical model of a learning machine, such as a deep recurrent, cyclic neural network. This establishes a correspondence between each solution of the physical theory and a run of a neural network.

This correspondence is not an equivalence, partly because gauge theories emerge from N → ∞ limits of the matrix models, whereas the same limits of the neural networks used here are not well-defined.

We discuss in detail what it means to say that learning takes place in autodidactic systems, where there is no supervision. We propose that if the neural network model can be said to learn without supervision, the same can be said for the corresponding physical theory.

We consider other protocols for autodidactic physical systems, such as optimization of graph variety, subset-replication using self-attention and look-ahead, geometrogenesis guided by reinforcement learning, structural learning using renormalization group techniques, and extensions. These protocols together provide a number of directions in which to explore the origin of physical laws based on putting machine learning architectures in correspondence with physical theories. Read More

#artificial-intelligence, #human

How Audio Pros ‘Upmix’ Vintage Tracks and Give Them New Life

Experts are using AI to pick apart classic recordings from the 50s and 60s, isolate the instruments, and stitch them back together in crisp, bold ways.

When James Clarke went to work at London’s legendary Abbey Road Studios in late 2009, he wasn’t an audio engineer. He’d been hired to work as a software programmer. One day not long after he started, he was having lunch with several studio veterans of the 1960s and ’70s, the pre-computer era of music recording when songs were captured on a single piece of tape. To make conversation, Clarke asked a seemingly innocent question: Could you take a tape from the days before multitrack recording and isolate the individual instruments? Could you pull it apart?

The engineers shot him down. It turned into “several hours of the ins and outs of why it’s not possible,” Clarke remembers. You could perform a bit of sonic trickery to transform a song from one-channel mono to two-channel stereo, but that didn’t interest him. Clarke was seeking something more exacting: a way to pick apart a song so a listener could hear just one element at a time. Maybe just the guitar, maybe the drums, maybe the singer.

“I kept saying to them that if the human ear can do it, we can write software to do it as well,” he says. To him, this was a challenge. “I’m from New Zealand. We love proving people wrong.” Read More
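Clarke's production system is neural-network based and far beyond a few lines of code, but the underlying idea — separating sources in the frequency domain — can be shown with a toy example: mix two pure tones as stand-ins for instruments, then isolate one with an ideal spectral mask.

```python
import numpy as np

sr = 8000                       # sample rate, Hz
t = np.arange(sr) / sr          # one second of audio
guitar = np.sin(2 * np.pi * 220 * t)   # stand-in "guitar" at 220 Hz
voice = np.sin(2 * np.pi * 880 * t)    # stand-in "voice" at 880 Hz
mix = guitar + voice            # the "mono tape": one channel, both sources

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / sr)

# Keep only bins near 220 Hz -- an "ideal mask" for the guitar.
mask = np.abs(freqs - 220) < 5
isolated = np.fft.irfft(spectrum * mask, n=len(mix))

error = np.max(np.abs(isolated - guitar))
print(error)  # close to zero: the tone is recovered almost exactly
```

Real instruments overlap heavily in frequency, which is exactly why the masks have to be learned rather than hand-drawn — but the mechanism the network ultimately applies is the same: weight the spectrogram, then invert it back to audio.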

#human