I’ve been in this industry long enough to watch technologies come and go. I’ve seen the excitement around new frameworks, the promises of revolutionary tools, and the breathless predictions about what would “change everything.” Most of the time, these technologies turned out to be incremental improvements wrapped in marketing hyperbole.
But parallel agents? This is different. This is the first time I can say, without any exaggeration, that I’m witnessing technology that will fundamentally transform how we develop software. — Read More
Tag Archives: Human
AGI is an Engineering Problem
We’ve reached an inflection point in AI development. The scaling laws that once promised ever-more-capable models are showing diminishing returns. GPT-5, Claude, and Gemini represent remarkable achievements, but they’re approaching asymptotes that brute-force scaling can’t push past. The path to artificial general intelligence isn’t through training ever-larger language models—it’s through building engineered systems that combine models, memory, context, and deterministic workflows into something greater than their parts.
Let me be blunt: AGI is an engineering problem, not a model training problem. — Read More
Chinese AI Researchers Just Put a Monkey’s Brain on a Computer
This was not on Jane Goodall’s bingo card. With 2 billion neurons, researchers say the DeepSeek-powered Darwin Monkey is a major step toward ‘brain-like intelligence.’
We’re already getting glimpses of AI technology that goes far beyond chatbots to model the brains of living beings.
Chinese researchers say they created an AI version of a monkey’s brain and put it on a computer. The system runs on 960 chips and “supports over 2 billion spiking neurons and over 100 billion synapses, approaching the number of neurons in a macaque brain,” according to Zhejiang University, as translated by Google.
Researchers named the project the Darwin Monkey and say it’s “a step toward more advanced brain-like intelligence.” It’s the largest brain-like, or “neuromorphic,” computer in the world, and the first that’s based on neuromorphic-specific chips, Interesting Engineering reports. — Read More
Inside a neuroscientist’s quest to cure coma
Locked inside their minds, thousands await a cure. Neuroscientist Daniel Toker is racing to find it.
The study of consciousness is a field crowded with scientists, philosophers, and gurus. But neuroscientist Daniel Toker is focused on its shadow twin: unconsciousness.
His path to this research began with a tragedy — one he witnessed firsthand. While at a music festival, a young concertgoer near Toker dove headfirst into a shallow lake. He quickly surfaced, his body limp and still. Toker, along with others, rushed to help. He performed CPR, but it soon became apparent that the young person’s neck had snapped. There was nothing to be done. — Read More
The Path to Medical Superintelligence
The Microsoft AI team shares research that demonstrates how AI can sequentially investigate and solve medicine’s most complex diagnostic challenges—cases that expert physicians struggle to crack.
Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than that achieved by a group of experienced physicians. MAI-DxO also reaches the correct diagnosis more cost-effectively than physicians do. — Read More
7 People Now Have Elon Musk’s Neuralink Brain Implant
The brain-computer interface lets those with cervical spinal cord injuries or ALS control a computer with their thoughts. This year, Neuralink has more than doubled the number of patients.
Neuralink has been quietly increasing the number of patients with its N1 brain implant. According to the Barrow Neurological Institute, seven people have now received one. — Read More
Microsoft and OpenAI at Odds Over Future of AGI: A Tech Titans’ Tug-of-War
Microsoft and OpenAI are embroiled in a heated debate over the future of Artificial General Intelligence (AGI). OpenAI’s CEO, Sam Altman, is optimistic about nearing AGI, while Microsoft’s Satya Nadella remains doubtful, suspecting potential manipulation. This disagreement could disrupt their exclusive partnership and reshape the AI landscape. With big stakes in AGI’s advent, the two companies are grappling over contracts, ownership, and tech access, while OpenAI eyes new alliances with rivals like Oracle and Google. — Read More
Does AI Think Like We Do?
Does ChatGPT think like we do? It sounds like one of those questions a five-year-old might ask his dumbstruck parents. Why do you have to know whether Santa is real, honey? Isn’t it enough to get presents on Christmas morning?
Similarly, isn’t it enough that large language models (LLMs) can do amazing things like write code, turn complex technical documents into understandable tutorials, compose music, generate art, and pen an ode to Dunkin’ in the style of Shakespeare? (OK, we’ve all done that last one.) They’re dazzling tools with known limitations, and they’re getting better every day. Isn’t that enough? Why does it matter whether what’s under their virtual hoods operates like what’s inside our bony skulls?
Clearly, if an LLM can converse and dispense knowledge with the convincing authority of a professor, doctor, or lawyer, it seems to be “thinking” in an everyday or instrumental sense. But it might also be an elaborate fake. If you get hold of the answers the day before the test and memorize them, a perfect score says nothing about your command of the material. Fakery always has limits. — Read More
Large language models for artificial general intelligence (AGI): A survey of foundational principles and approaches
Generative artificial intelligence (AI) systems based on large-scale pretrained foundation models (PFMs) such as vision-language models, large language models (LLMs), diffusion models, and vision-language-action (VLA) models have demonstrated the ability to solve complex and truly non-trivial AI problems in a wide variety of domains and contexts. Multimodal large language models (MLLMs), in particular, learn from vast and diverse data sources, allowing rich and nuanced representations of the world and thereby providing extensive capabilities, including the ability to reason; engage in meaningful dialog; collaborate with humans and other agents to jointly solve complex problems; and understand social and emotional aspects of humans. Despite these impressive feats, the cognitive abilities of state-of-the-art LLMs trained on large-scale datasets are still superficial and brittle. Consequently, generic LLMs are severely limited in their generalist capabilities. A number of foundational problems (embodiment, symbol grounding, causality, and memory) must be addressed for LLMs to attain human-level general intelligence. These concepts are more aligned with human cognition and provide LLMs with inherent human-like cognitive properties that support the realization of physically plausible, semantically meaningful, flexible, and more generalizable knowledge and intelligence. In this work, we discuss the aforementioned foundational issues and survey state-of-the-art approaches for implementing these concepts in LLMs. Specifically, we discuss how the principles of embodiment, symbol grounding, causality, and memory can be leveraged toward the attainment of artificial general intelligence (AGI) in an organic manner. — Read More
Neuralink competitor Paradromics completes first human implant
Neurotech startup Paradromics on Monday announced it has implanted its brain-computer interface in a human for the first time.
The procedure took place May 14 at the University of Michigan with a patient who was already undergoing neurosurgery to treat epilepsy. The company’s technology was implanted and removed from the patient’s brain in about 20 minutes during that surgery.
Paradromics said the procedure demonstrated that its system can be safely implanted and record neural activity. — Read More