Compact deep neural network models of the visual cortex

A powerful approach to understanding the computations carried out by the visual cortex is to build models that predict neural responses to any arbitrary image. Deep neural networks (DNNs) have emerged as the leading predictive models [1,2], yet their underlying computations remain buried beneath millions of parameters. Here we challenge the need for models at this scale by seeking predictive and parsimonious DNN models of the primate visual cortex. We first built a highly predictive DNN model of neural responses in macaque visual area V4 by alternating data collection and model training in adaptive closed-loop experiments. We then compressed this large, black-box DNN model, which comprised 60 million parameters, to identify compact models with 5,000 times fewer parameters yet comparable accuracy. This dramatic compression enabled us to investigate the inner workings of the compact models. We discovered a salient computational motif: compact models share similar filters in early processing, but individual models then specialize their feature selectivity by ‘consolidating’ this shared high-dimensional representation in distinct ways. We examined this consolidation step in a dot-detecting model neuron, revealing a computational mechanism that leads to a testable circuit hypothesis for dot-selective V4 neurons. Beyond V4, we found strong model compression for macaque visual areas V1 and IT (inferior temporal cortex), revealing a general computational principle of the visual cortex. Overall, our work challenges the notion that large DNNs are necessary to predict individual neurons and establishes a modelling framework that balances prediction and parsimony. — Read More
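The core idea above, compressing a large predictive model into a far smaller one with comparable accuracy, can be illustrated with a toy distillation sketch. This is illustrative only: the paper's actual method fits DNNs to macaque neural data in closed-loop experiments, and every name, size, and architecture below is an invented stand-in. A small "student" model is fit to reproduce the outputs of a much larger random "teacher", then the parameter counts and prediction quality are compared.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a wide random-feature network standing in for the
# 60M-parameter DNN (sizes are illustrative, not from the paper).
D_IN, D_BIG, D_SMALL = 20, 2048, 64
W1 = rng.normal(size=(D_IN, D_BIG)) / np.sqrt(D_IN)
w2 = rng.normal(size=D_BIG) / np.sqrt(D_BIG)

def teacher(x):
    # ReLU features -> scalar "neural response"
    return np.maximum(x @ W1, 0.0) @ w2

# Probe the teacher with random stimuli and record its responses.
X = rng.normal(size=(4096, D_IN))
y = teacher(X)

# Compact "student": far fewer units; fit its readout to match the
# teacher's responses (plain response distillation via least squares).
V1 = rng.normal(size=(D_IN, D_SMALL)) / np.sqrt(D_IN)
H = np.maximum(X @ V1, 0.0)
v2, *_ = np.linalg.lstsq(H, y, rcond=None)

big = W1.size + w2.size          # teacher parameter count
small = V1.size + v2.size        # student parameter count
corr = np.corrcoef(y, H @ v2)[0, 1]
print(f"{big / small:.0f}x fewer parameters, corr(student, teacher) = {corr:.2f}")
```

The student here retains a strong correlation with the teacher despite having roughly 30x fewer parameters, a miniature version of the paper's observation that much of a large model's predictive behaviour survives aggressive compression.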

#human

Ex-Google PM Builds God’s Eye to Monitor Iran in 4D

Read More
#dod, #videos

AI-native networks are no longer a 6G promise: MWC 2026 just proved it

AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the evidence. A cascade of announcements from the world’s biggest telecom vendors, chipmakers, and operators didn’t just reiterate the vision for AI-RAN; they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations.

For enterprise and IT decision-makers, the signal is clear: the architectural shift happening in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised. — Read More

#cyber

Gastown, Claude, and the Rise of AI Factories with Steve Yegge

Read More
#videos

2026: The Year The IDE Died

Read More
#videos

The Anthropic Hive Mind

… If you run some back-of-envelope math on how hard it is to get into Anthropic, as an industry professional, and compare it to your odds of making it as a HS or college player into the National Football League, you’ll find the odds are comparable. Everyone I’ve met from Anthropic is the best of the best of the best, to an even crazier degree than Google was at its peak. (Evidence: Google hired me. I was the scrapest of the byest.)

…Everyone you talk to from Anthropic will eventually mention the chaos. It is not run like any other company of this size. Every other company quickly becomes “professional” and compartmentalized and accountable and grown-up and whatnot at their size. … Anthropic is completely run by vibes. — Read More

#strategy

AI Pioneer: The Bubble Is Real And Could Trigger an AI Winter | Andrew Ng

Read More
#videos

13 thoughts on Anthropic, OpenAI and the Department of War

When I went to bed last night, it appeared that Secretary of War Pete Hegseth (it still feels surreal to type that phrase) had potentially undermined American competitiveness by instructing the federal government not to use Claude and designating the company behind it, Anthropic, as a supply chain risk, a move that could force divestment in Anthropic from Nvidia, Amazon, Google and other companies that contract with the federal government. Was the military going to be stuck using Elon Musk’s Grok, a model that has its uses but is decidedly not on the lead lap and is reportedly considered too unreliable for classified settings?

Nope. Instead, I awoke to news that the Pentagon had reached an agreement with Anthropic rival OpenAI. (And also that we were bombing Iran.) This is at least a little bit more rational, which is not to say that you should feel happy about any of this. The story is complicated and is still developing; Anthropic will take its case to court and the government could TACO out. (For instance, by signing the deal with OpenAI but unbanning Claude.)

Nevertheless, the intersection of AI and politics falls squarely into the Silver Bulletin wheelhouse, something I’m sure we’ll be covering more and more. — Read More

#dod

AI chatbots chose nuclear escalation in 95% of simulated war games, study finds

At least one AI model in every war game escalated the conflict by threatening to use nuclear weapons, the study found.

Artificial intelligence could dramatically change how nuclear crises are handled, according to a new study.

The pre-print study from King’s College London pitted OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini Flash against each other in simulated war games. Each large language model took on the role of a national leader commanding a nuclear-armed superpower in a Cold War-style crisis.

In every game, at least one model attempted to escalate the conflict by threatening to detonate a nuclear weapon. — Read More

#strategy

Large-Scale Online Deanonymization with LLMs

TL;DR: We show that LLM agents can figure out who you are from your anonymous online posts. Across Hacker News, Reddit, LinkedIn, and anonymized interview transcripts, our method identifies users with high precision – and scales to tens of thousands of candidates.

While it has long been known that individuals can be uniquely identified from surprisingly few attributes, exploiting this was often impractical: data is typically available only in unstructured form, and deanonymization used to require human investigators to search and reason from clues. We show that from a handful of comments, LLMs can infer where you live, what you do, and your interests – then search for you on the web. Our new research shows that this is not only possible but increasingly practical. — Read More

Read the Paper

#privacy