‘Periodic table’ for AI methods aims to drive innovation

Artificial intelligence is increasingly used to integrate and analyze multiple types of data, such as text, images, audio and video. One challenge slowing advances in multimodal AI, however, is choosing the algorithmic method best aligned with the specific task an AI system needs to perform.

Scientists have developed a unified view of AI methods aimed at systematizing this process. The Journal of Machine Learning Research published the new framework for deriving algorithms, developed by physicists at Emory University. — Read More

#standards

Beyond Language Modeling: An Exploration of Multimodal Pretraining

The visual world offers a critical axis for advancing foundation models beyond language. Despite growing interest in this direction, the design space for native multimodal models remains opaque. We provide empirical clarity through controlled, from-scratch pretraining experiments, isolating the factors that govern multimodal pretraining without interference from language pretraining. We adopt the Transfusion framework, using next-token prediction for language and diffusion for vision, to train on diverse data including text, video, image-text pairs, and even action-conditioned video. Our experiments yield four key insights: (i) Representation Autoencoder (RAE) provides an optimal unified visual representation by excelling at both visual understanding and generation; (ii) visual and language data are complementary and yield synergy for downstream capabilities; (iii) unified multimodal pretraining leads naturally to world modeling, with capabilities emerging from general training; and (iv) Mixture-of-Experts (MoE) enables efficient and effective multimodal scaling while naturally inducing modality specialization. Through IsoFLOP analysis, we compute scaling laws for both modalities and uncover a scaling asymmetry: vision is significantly more data-hungry than language. We demonstrate that the MoE architecture harmonizes this scaling asymmetry by providing the high model capacity required by language while accommodating the data-intensive nature of vision, paving the way for truly unified multimodal models. — Read More
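The Transfusion-style objective the abstract describes — next-token prediction on text plus a diffusion (noise-prediction) loss on visual latents — can be sketched in a few lines of numpy. This is a toy illustration, not the paper's implementation: the function names, the mixing weight `lam`, and all tensor shapes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_token_loss(logits, targets):
    # Mean cross-entropy over the vocabulary for each text position.
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def diffusion_loss(pred_noise, true_noise):
    # Simple epsilon-prediction MSE, as in DDPM-style diffusion training.
    return ((pred_noise - true_noise) ** 2).mean()

def joint_pretraining_loss(logits, targets, pred_noise, true_noise, lam=1.0):
    # One sequence can interleave both modalities; the total objective is
    # a weighted sum of the language and vision losses.
    return next_token_loss(logits, targets) + lam * diffusion_loss(pred_noise, true_noise)

# Toy batch: 4 text positions over a 10-token vocabulary, a 16-dim noise target.
logits = rng.normal(size=(4, 10))
targets = rng.integers(0, 10, size=4)
true_noise = rng.normal(size=16)
pred_noise = true_noise + 0.1 * rng.normal(size=16)

loss = joint_pretraining_loss(logits, targets, pred_noise, true_noise, lam=5.0)
```

The key design point carried over from the abstract is that a single model is trained on one combined objective, so text tokens and visual latents share parameters rather than being handled by separate models.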

#training

Compact deep neural network models of the visual cortex

A powerful approach to understand the computations carried out by the visual cortex is to build models that predict neural responses to any arbitrary image. Deep neural networks (DNNs) have emerged as the leading predictive models, yet their underlying computations remain buried beneath millions of parameters. Here we challenge the need for models at this scale by seeking predictive and parsimonious DNN models of the primate visual cortex. We first built a highly predictive DNN model of neural responses in macaque visual area V4 by alternating data collection and model training in adaptive closed-loop experiments. We then compressed this large, black-box DNN model, which comprised 60 million parameters, to identify compact models with 5,000 times fewer parameters yet comparable accuracy. This dramatic compression enabled us to investigate the inner workings of the compact models. We discovered a salient computational motif: compact models share similar filters in early processing, but individual models then specialize their feature selectivity by ‘consolidating’ this shared high-dimensional representation in distinct ways. We examined this consolidation step in a dot-detecting model neuron, revealing a computational mechanism that leads to a testable circuit hypothesis for dot-selective V4 neurons. Beyond V4, we found strong model compression for macaque visual areas V1 and IT (inferior temporal cortex), revealing a general computational principle of the visual cortex. Overall, our work challenges the notion that large DNNs are necessary to predict individual neurons and establishes a modelling framework that balances prediction and parsimony. — Read More
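The compression idea — a large predictor whose behavior is recoverable by a model with far fewer parameters — can be illustrated with generic magnitude pruning. Note this is not the authors' procedure (the abstract does not specify one); the toy linear "neuron", the 5% keep fraction, and the correlation check are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_by_magnitude(w, keep_fraction):
    # Zero out all but the largest-magnitude fraction of the weights.
    k = max(1, int(keep_fraction * w.size))
    threshold = np.sort(np.abs(w), axis=None)[-k]
    return np.where(np.abs(w) >= threshold, w, 0.0)

# Toy linear "neuron": its response x @ w is dominated by a few strong weights.
w = rng.normal(size=500) * (rng.random(500) < 0.02)  # ~2% strong weights
w += 0.01 * rng.normal(size=500)                     # plus weak background
x = rng.normal(size=(200, 500))                      # 200 random "stimuli"

w_small = prune_by_magnitude(w, keep_fraction=0.05)  # 20x fewer parameters
full, compact = x @ w, x @ w_small
corr = np.corrcoef(full, compact)[0, 1]              # predictive agreement
```

Because most of the response variance lives in a handful of strong weights, the 20x-smaller model tracks the full model almost perfectly, which is the flavor of result the paper reports at far larger scale.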

#human

Ex-Google PM Builds God’s Eye to Monitor Iran in 4D

Read More
#dod, #videos

AI-native networks are no longer a 6G promise: MWC 2026 just proved it

AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the evidence. A cascade of announcements from the world’s biggest telecom vendors, chipmakers, and operators didn’t just reiterate the vision for AI-RAN; they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations.

For enterprise and IT decision-makers, the signal is clear: the architectural shift happening in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised. — Read More

#cyber

Gastown, Claude, and the Rise of AI Factories with Steve Yegge

Read More
#videos

2026: The Year The IDE Died

Read More
#videos

The Anthropic Hive Mind

… If you run some back-of-the-envelope math on how hard it is to get into Anthropic as an industry professional, and compare it to your odds of making it into the National Football League as a high-school or college player, you’ll find the odds are comparable. Everyone I’ve met from Anthropic is the best of the best of the best, to an even crazier degree than Google was at its peak. (Evidence: Google hired me. I was the scrapest of the byest.)

…Everyone you talk to from Anthropic will eventually mention the chaos. It is not run like any other company of this size. Every other company quickly becomes “professional” and compartmentalized and accountable and grown-up and whatnot at their size. … Anthropic is completely run by vibes. — Read More

#strategy

AI Pioneer: The Bubble Is Real And Could Trigger an AI Winter | Andrew Ng

Read More
#videos

13 thoughts on Anthropic, OpenAI and the Department of War

When I went to bed last night, it appeared that Secretary of War Pete Hegseth (it still feels surreal to type that phrase) had potentially undermined American competitiveness by instructing the federal government not to use Claude and designating the company behind it, Anthropic, as a supply chain risk, a move that could force divestment in Anthropic from Nvidia, Amazon, Google and other companies that contract with the federal government. Was the military going to be stuck using Elon Musk’s Grok, a model that has its uses but is decidedly not on the lead lap and is reportedly considered too unreliable for classified settings?

Nope. Instead, I awoke to news that the Pentagon had reached an agreement with Anthropic rival OpenAI. (And also that we were bombing Iran.) This is at least a little bit more rational, which is not to say that you should feel happy about any of this. The story is complicated and is still developing; Anthropic will take its case to court and the government could TACO out. (For instance, by signing the deal with OpenAI but unbanning Claude.)

Nevertheless, the intersection of AI and politics falls squarely into the Silver Bulletin wheelhouse, something I’m sure we’ll be covering more and more. — Read More

#dod