Chinese AI Researchers Just Put a Monkey’s Brain on a Computer

This was not on Jane Goodall’s bingo card. With 2 billion neurons, researchers say the DeepSeek-powered Darwin Monkey is a major step toward ‘brain-like intelligence.’

We’re already getting glimpses of AI technology that goes far beyond chatbots to model the brains of living beings.

Chinese researchers say they have created an AI version of a monkey’s brain and put it on a computer. Built from 960 chips, the system “supports over 2 billion spiking neurons and over 100 billion synapses, approaching the number of neurons in a macaque brain,” according to Zhejiang University, as translated by Google.

Researchers named the project the Darwin Monkey and say it’s “a step toward more advanced brain-like intelligence.” It’s the largest brain-like, or “neuromorphic,” computer in the world, and the first that’s based on neuromorphic-specific chips, Interesting Engineering reports. — Read More
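The “spiking neurons” that neuromorphic chips like these emulate can be illustrated with the classic leaky integrate-and-fire model. This is a minimal sketch with arbitrary parameters, not the Darwin chips’ actual neuron model:

```python
# Minimal leaky integrate-and-fire (LIF) spiking neuron. A membrane
# potential leaks toward rest, integrates input current, and emits a
# discrete spike when it crosses a threshold. Parameters are illustrative.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of spike flags (1 = spike) for an input-current trace."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t      # leaky integration of the input
        if v >= threshold:      # potential crosses the firing threshold
            spikes.append(1)    # emit a spike...
            v = reset           # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold drive still accumulates into periodic spikes.
spikes = simulate_lif([0.3] * 20)
print(sum(spikes), "spikes in 20 steps")  # → 5 spikes in 20 steps
```

Neuromorphic hardware runs vast numbers of such units in parallel and event-driven, communicating only when spikes occur, which is what lets billions of neurons fit in a practical power budget.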

#human

Inside a neuroscientist’s quest to cure coma

Locked inside their minds, thousands await a cure. Neuroscientist Daniel Toker is racing to find it.

The study of consciousness is a field crowded with scientists, philosophers, and gurus. But neuroscientist Daniel Toker is focused on its shadow twin: unconsciousness.

His path to this research began with a tragedy — one he witnessed firsthand. While at a music festival, a young concertgoer near Toker dove headfirst into a shallow lake. He quickly surfaced, his body limp and still. Toker, along with others, rushed to help. He performed CPR, but it soon became apparent that the young person’s neck had snapped. There was nothing to be done. — Read More

#human

The Path to Medical Superintelligence

The Microsoft AI team shares research that demonstrates how AI can sequentially investigate and solve medicine’s most complex diagnostic challenges—cases that expert physicians struggle to answer.

Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than that of a group of experienced physicians. MAI-DxO also reaches the correct diagnosis more cost-effectively than physicians do. — Read More

#human

7 People Now Have Elon Musk’s Neuralink Brain Implant

The brain-computer interface lets those with cervical spinal cord injuries or ALS control a computer with their thoughts. This year, Neuralink has more than doubled the number of patients.

Neuralink has been quietly increasing the number of patients with its N1 brain implant. According to the Barrow Neurological Institute, seven people have now received one. — Read More

#human

Microsoft and OpenAI at Odds Over Future of AGI: A Tech Titans’ Tug-of-War

Microsoft and OpenAI are embroiled in a heated debate over the future of Artificial General Intelligence (AGI). OpenAI’s CEO, Sam Altman, is optimistic about nearing AGI, while Microsoft’s Satya Nadella remains doubtful, suspecting potential manipulation. This disagreement could disrupt their exclusive partnership and reshape the AI landscape. With big stakes in AGI’s advent, the two companies are grappling over contracts, ownership, and tech access, while OpenAI eyes new alliances with rivals like Oracle and Google. — Read More

#human

Does AI Think Like We Do?

Does ChatGPT think like we do? It sounds like one of those questions a five-year-old might ask his dumbstruck parents. Why do you have to know whether Santa is real, honey? Isn’t it enough to get presents on Christmas morning?

Similarly, isn’t it enough that large language models (LLMs) can do amazing things like write code, turn complex technical documents into understandable tutorials, compose music, generate art, and pen an ode to Dunkin’ in the style of Shakespeare? (OK, we’ve all done that last one.) They’re dazzling tools with known limitations and they’re getting better every day. Isn’t that enough? Why does it matter whether what’s under their virtual hoods operates like what’s inside our bony skulls?

Clearly, if an LLM can converse and dispense knowledge with the convincing authority of a professor, doctor, or lawyer, it seems to be “thinking” in an everyday or instrumental sense. But it might also be an elaborate fake. If you obtain the answers the day before the test and memorize them, a perfect score says nothing about your command of the material. Fakery always has limits. — Read More

#human

Large language models for artificial general intelligence (AGI): A survey of foundational principles and approaches

Generative artificial intelligence (AI) systems based on large-scale pretrained foundation models (PFMs) such as vision-language models, large language models (LLMs), diffusion models and vision-language-action (VLA) models have demonstrated the ability to solve complex and truly non-trivial AI problems in a wide variety of domains and contexts. Multimodal large language models (MLLMs), in particular, learn from vast and diverse data sources, allowing rich and nuanced representations of the world and thereby providing extensive capabilities, including the ability to reason, engage in meaningful dialog, collaborate with humans and other agents to jointly solve complex problems, and understand the social and emotional aspects of humans. Despite these impressive feats, the cognitive abilities of state-of-the-art LLMs trained on large-scale datasets are still superficial and brittle. Consequently, generic LLMs are severely limited in their generalist capabilities. A number of foundational problems — embodiment, symbol grounding, causality and memory — must be addressed for LLMs to attain human-level general intelligence. These concepts are more aligned with human cognition and provide LLMs with inherent human-like cognitive properties that support the realization of physically plausible, semantically meaningful, flexible and more generalizable knowledge and intelligence. In this work, we discuss the aforementioned foundational issues and survey state-of-the-art approaches for implementing these concepts in LLMs. Specifically, we discuss how the principles of embodiment, symbol grounding, causality and memory can be leveraged toward the attainment of artificial general intelligence (AGI) in an organic manner. — Read More

#human

Neuralink competitor Paradromics completes first human implant

Neurotech startup Paradromics on Monday announced it has implanted its brain-computer interface in a human for the first time.

The procedure took place May 14 at the University of Michigan with a patient who was already undergoing neurosurgery to treat epilepsy. The company’s technology was implanted and removed from the patient’s brain in about 20 minutes during that surgery.

Paradromics said the procedure demonstrated that its system can be safely implanted and record neural activity. — Read More

#human

Human Brain Cells on a Chip for Sale: World-first biocomputing platform hits the market 

In a development straight out of science fiction, Australian startup Cortical Labs has released what it calls the world’s first code-deployable biological computer. The CL1, which debuted in March, fuses human brain cells on a silicon chip to process information via sub-millisecond electrical feedback loops.

Designed as a tool for neuroscience and biotech research, the CL1 offers a new way to study how brain cells process and react to stimuli. Unlike conventional silicon-based systems, the hybrid platform uses live human neurons capable of adapting, learning, and responding to external inputs in real time. — Read More
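The stimulate-and-read feedback pattern such platforms expose can be sketched generically: deliver an electrical stimulus, record the culture’s response, and repeat as the neurons adapt. The `BioChip` class below is a toy software stand-in with made-up plasticity dynamics, not Cortical Labs’ actual API:

```python
# Illustrative closed-loop stimulate/read cycle for a biocomputing
# platform. BioChip is a simulated stand-in: its "culture" grows more
# responsive with repeated stimulation (a crude model of plasticity).
import random

random.seed(0)  # reproducible noise for this demo

class BioChip:
    """Toy model of a neural culture whose sensitivity adapts to input."""
    def __init__(self):
        self.sensitivity = 0.1

    def stimulate(self, amplitude):
        # Response scales with current sensitivity, plus recording noise;
        # each stimulus nudges sensitivity upward (simulated adaptation).
        response = self.sensitivity * amplitude + random.gauss(0, 0.01)
        self.sensitivity += 0.05 * amplitude
        return response

chip = BioChip()
responses = [chip.stimulate(1.0) for _ in range(5)]
# Responses trend upward as the simulated culture adapts to the stimulus.
```

In a real system the loop closes in well under a millisecond, with the host computer encoding task information as stimulation patterns and decoding the culture’s spiking activity as output.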

#human

Why do people disagree about when powerful AI will arrive?

Few would deny that AI progress over the past few years has been rapid.

Large Language Models (LLMs) have provided an unexpected path to increasingly general capabilities. In 2019, OpenAI’s GPT-2 struggled to write a coherent paragraph. In 2025, LLMs write fluent essays, outcompete human experts at graduate-level science questions, and excel at competition mathematics and coding. The most advanced multi-modal AI models now produce images and video that are hard to distinguish from reality. 

These models are impressive (and useful!), but they still fall short of the north star that frontier AI companies are working towards. Artificial General Intelligence (AGI), which OpenAI describes as “a highly autonomous system that outperforms humans at most economically valuable work,” has been the ultimate ambition of AI researchers for many decades.

Most experts agree that AGI is possible. They also agree that it will have transformative consequences. There is less consensus about what these consequences will be. Some believe AGI will usher in an age of radical abundance. Others believe it will likely lead to human extinction. One thing we can be sure of is that a post-AGI world would look very different to the one we live in today. 

So, is AGI just around the corner? Or are there still hard problems in front of us that will take decades to crack, despite the speed of recent progress? This is a subject of live debate. Ask various groups when they think AGI will arrive and you’ll get very different answers, ranging from just a couple of years to more than two decades. 

Why is this? We’ve tried to pin down some core disagreements.  — Read More

#human