In a dark MRI scanner outside Tokyo, a volunteer watches a video of someone hurling themselves off a waterfall. Nearby, a computer digests the brain activity pulsing across millions of neurons. A few moments later, the machine produces a sentence: “A person jumps over a deep water fall on a mountain ridge.”
No one typed those words. No one spoke them. They came directly from the volunteer’s brain activity.
That’s the startling premise of “mind captioning,” a new method developed by Tomoyasu Horikawa and colleagues at NTT Communication Science Laboratories in Japan. Published this week in Science Advances, the system uses a blend of brain imaging and artificial intelligence to generate textual descriptions of what people are seeing — or even visualizing with their mind’s eye — based only on their neural patterns. — Read More
Daily Archives: November 8, 2025
Google’s Ironwood TPUs represent a bigger threat than Nvidia would have you believe
Look out, Jensen! With its TPUs, Google has shown time and time again that it’s not the size of your accelerators that matters but how efficiently you can scale them in production.
Now, with its latest generation of Ironwood accelerators slated for general availability in the coming weeks, the Chocolate Factory has not only scale on its side but also a tensor processing unit (TPU) with the grunt to give Nvidia’s Blackwell behemoths a run for their money. — Read More
Kimi K2 Thinking
Today, we are introducing Kimi K2 Thinking, our best open-source thinking model.
Built as a thinking agent, it reasons step by step while using tools, achieving state-of-the-art performance on Humanity’s Last Exam (HLE), BrowseComp, and other benchmarks, with major gains in reasoning, agentic search, coding, writing, and general capabilities.
… K2 Thinking is now live on kimi.com under the chat mode, with its full agentic mode available soon. — Read More