Exclusive: Connectome Pioneer Sebastian Seung Is Building A Digital Brain

On a Sunday evening earlier this month, a Stanford professor held a salon at her home near the university’s campus. The main topic for the event was “synthesizing consciousness through neuroscience,” and the home filled with dozens of people, including artificial intelligence researchers, doctors, neuroscientists, philosophers and a former monk, eager to discuss the current collision between new AI and biological tools and how we might identify the arrival of a digital consciousness.

The opening speaker for the salon was Sebastian Seung, and this made a lot of sense. Seung, a neuroscience and computer science professor at Princeton University, has spent much of the last year enjoying the afterglow of his (and others’) breakthrough research describing the inner workings of the fly brain. Seung, you see, helped create the first complete wiring diagram of a fly brain and its 140,000 neurons and 55 million synapses. (Nature put out a special issue last October to document the achievement and its implications.) This diagram, known as a connectome, took more than a decade to finish and stands as the most detailed look at the most complex whole brain ever produced.
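
The “wiring diagram” framing is literal: a connectome is essentially a huge directed graph in which neurons are nodes and each edge is weighted by a synapse count. A minimal sketch of that data structure, with invented neuron names and counts that are not drawn from the actual fly dataset, might look like this:

```python
# Minimal sketch: a connectome as a directed, weighted graph.
# Neuron IDs and synapse counts are made up for illustration only.
from collections import defaultdict

class Connectome:
    """Neurons are nodes; each directed edge carries a synapse count."""

    def __init__(self):
        # pre-synaptic neuron id -> {post-synaptic neuron id: synapse count}
        self.edges = defaultdict(dict)

    def add_synapses(self, pre, post, count):
        self.edges[pre][post] = self.edges[pre].get(post, 0) + count

    def downstream(self, neuron, min_synapses=1):
        """Neurons this one projects to with at least min_synapses connections."""
        return {post: n for post, n in self.edges[neuron].items() if n >= min_synapses}

# Toy usage with hypothetical neuron IDs
wiring = Connectome()
wiring.add_synapses("ORN_42a", "PN_DM1", 35)   # receptor neuron -> projection neuron
wiring.add_synapses("PN_DM1", "KC_0137", 12)   # projection neuron -> Kenyon cell
print(wiring.downstream("ORN_42a"))            # {'PN_DM1': 35}
```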

… What Seung did not reveal to the audience is that the fly connectome has given rise to his own new neuroscience journey. This week, he’s unveiling a start-up called Memazing, as we can exclusively report. The new company seeks to build the technology needed to reverse-engineer the fly brain (and eventually even more complex brains) and create full recreations – or emulations, as Seung calls them – of the brain in software. — Read More

#human

Disney’s OpenAI deal is exclusive for just one year — then it’s open season

Disney’s three-year licensing partnership with OpenAI includes just one year of exclusivity, Disney CEO Bob Iger told CNBC. The partnership, signed last week, will bring Disney’s iconic characters to the AI firm’s Sora video generator. Once that exclusive year is up, Disney is free to sign similar deals with other AI companies.

The deal gives OpenAI a high-profile content partner, allowing users to draw on more than 200 characters from Disney, Marvel, Pixar, and Star Wars to create content on Sora. For now, it’s the only AI platform that’s legally permitted to do so. — Read More

#vfx

The party’s AI: How China’s new AI systems are reshaping human rights

… China’s extensive AI-powered visual surveillance systems are already well documented. This report reveals new ways that the Chinese Communist Party (CCP) is using large language models (LLMs) and other AI systems to automate censorship, enhance surveillance and pre-emptively suppress dissent.

… AI-powered technology is widening the power differential between China’s state-supported companies operating abroad and foreign populations—further enabling some Chinese companies to systematically violate the economic rights of vulnerable groups outside China, despite Beijing’s claims that China respects the development rights and sovereignty of other countries.

The risks to other countries are clear. China is already the world’s largest exporter of AI-powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. — Read More

#china-ai

AI agents are starting to eat SaaS

We spent fifteen years watching software eat the world. Entire industries got swallowed by software – retail, media, finance, you name it – and the past couple of decades of incredible disruption brought with them a proliferation of SaaS tooling. This has led to a huge swath of SaaS companies – valued, collectively, in the trillions.

In my last post, debating whether the cost of software has dropped 90% with AI coding agents, I mainly looked at the supply side of the market. What will happen to demand for SaaS tooling if this hypothesis plays out? I’ve been thinking a lot about these second- and third-order effects of the changes in software engineering.

The calculus on build vs buy is starting to change. Software ate the world. Agents are going to eat SaaS. — Read More

#strategy

Economics of Orbital vs Terrestrial Data Centers

Before we get nerd sniped by the shiny engineering details, ask the only question that matters. Why compute in orbit? Why should a watt or a flop 250 miles up be more valuable than one on the surface? What advantage justifies moving something as mundane as matrix multiplication into LEO?

That “why” is almost missing from the public conversation. People jump straight to hardware and hand-wave the business case, as if the economics are self-evident. They aren’t. A lot of the energy here is FOMO and aesthetic futurism, not a grounded value proposition.

… This is all to say that the current discourse is increasingly bothering me due to the lack of rigor. — Read More

#strategy

NVIDIA Debuts Nemotron 3 Family of Open Models

The Nemotron 3 family of open models — in Nano, Super and Ultra sizes — is the most efficient family of open models, with leading accuracy for building agentic AI applications.

Nemotron 3 Nano delivers 4x higher throughput than Nemotron 2 Nano and the most tokens per second for multi-agent systems at scale, thanks to a breakthrough hybrid mixture-of-experts architecture.
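
The release doesn’t explain the architecture, but the core mixture-of-experts idea is that each token activates only a few “expert” sub-networks, so per-token compute stays low even as total parameter count grows. A toy top-k routing sketch, purely illustrative and not NVIDIA’s actual Nemotron implementation:

```python
# Toy sketch of top-k mixture-of-experts routing (illustrative only;
# not NVIDIA's Nemotron 3 architecture or code).
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" here is just a small dense layer.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_layer(x):
    """Route each token to its top_k experts and mix their outputs."""
    logits = x @ router                               # (tokens, n_experts) router scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]     # indices of the top_k experts per token
    out = np.zeros_like(x)
    for t, token in enumerate(x):
        chosen = logits[t, top[t]]
        weights = np.exp(chosen) / np.exp(chosen).sum()   # softmax over the chosen experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (token @ experts[e])
    return out

tokens = rng.normal(size=(3, d_model))    # 3 toy tokens
print(moe_layer(tokens).shape)            # (3, 8): only top_k experts ran per token
```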

Nemotron achieves superior accuracy through advanced reinforcement learning techniques and concurrent multi-environment post-training at scale.

NVIDIA is the first to release a collection of state-of-the-art open models, training datasets and reinforcement learning environments and libraries for building highly accurate, efficient, specialized AI agents. — Read More

#nvidia

If a Meta AI model can read a brain-wide signal, why wouldn’t the brain?

Did you know migratory birds and sea turtles are able to navigate using the Earth’s magnetic field? It’s called magnetoreception. Basically, being able to navigate was evolutionarily advantageous, so life evolved ways to feel the Earth’s magnetic field. A LOT of ways. Like a shocking number of ways.

It would seem evolution adores detecting magnetic fields. And it makes sense! A literal “sense of direction” is quite useful in staying alive – nearly all life benefits from it, including us.

We don’t totally understand how our magnetoreception works yet, but we know that it does. In 2019, some Caltech researchers put some people in a room shielded from the Earth’s magnetic field, with a big magnetic field generator in it. They hooked them up to an EEG, and watched what happened in their brains as they manipulated the magnetic field. The result: some of those people showed a response to the magnetic fields on the EEG!

That gets my noggin joggin. Our brain responds to magnetic field changes, but we aren’t aware of it? What if it affects our mood? Would you believe me if I told you lunar gravity influences the Earth’s magnetosphere? Perhaps I was too dismissive of astrology. — Read More

#human

Video App Zoom Shows Surprising Result By Topping Humanity’s Last Exam Benchmark, Beats Gemini 3 Pro

Topping AI benchmarks is usually thought to be the preserve of the top four AI frontier labs, but a surprising new name has emerged on the Humanity’s Last Exam benchmark.

Zoom, the video conferencing platform, has announced it achieved a state-of-the-art score of 48.1% on the Humanity’s Last Exam (HLE) full-set benchmark, surpassing Google’s Gemini 3 Pro with tools, which previously held the top position at 45.8%. The 2.3 percentage point improvement marks a significant achievement for a company better known for video calls than AI research. — Read More

#performance

Google Translate now lets you hear real-time translations in your headphones

Google is rolling out a beta experience that lets you hear real-time translations in your headphones, the company announced on Friday. The tech giant is also bringing advanced Gemini capabilities to Google Translate and expanding its language-learning tools in the Translate app.

The new real-time headphone translations experience keeps each speaker’s tone, emphasis, and cadence intact, so it’s easier to follow the conversation and tell who’s saying what, Google says. The new capability essentially turns any pair of headphones into a real-time, one-way translation device. — Read More

#audio

The Looming Existential Crisis of AI

… [AI] is not a flood, and it’s not even a tsunami. It’s a landslide, and it will not recede.

I have come to two conclusions. First, no matter how great or terrible you think AI may be, engagement is not optional. You will adapt or you will die. How quickly or how slowly is anyone’s guess. This conclusion came through an experience, one I will describe below.

Second, AI will not destroy us. Instead, we will destroy ourselves, as we give up our minds, our agency, and our social institutions to AI control.

The future is not Orwellian; it is Huxleyan. — Read More

#vfx