Fortunes are made by entrepreneurs and investors when revolutionary technologies enable waves of innovative, investable companies. Think of the railroad, the Bessemer process, electric power, the internal combustion engine, or the microprocessor—each of which, like a stray spark in a fireworks factory, set off decades of follow-on innovations, permeated every part of society, and catapulted a new set of inventors and investors into power, influence, and wealth.
Yet some technological innovations, though societally transformative, generate little in the way of new wealth; instead, they reinforce the status quo. Fifteen years before the microprocessor, another revolutionary idea, shipping containerization, arrived at a less propitious time, when technological advancement was a Red Queen’s race, and inventors and investors were left no better off for non-stop running.
Anyone who invests in the new new thing must answer two questions: First, how much value will this innovation create? And second, who will capture it? Information and communication technology (ICT) was a revolution whose value was captured by startups and led to thousands of newly rich founders, employees, and investors. In contrast, shipping containerization was a revolution whose value was spread so thin that in the end, it made only a single founder temporarily rich and only a single investor a little bit richer.
Is generative AI more like the former or the latter? Will it be the basis of many future industrial fortunes, or a net loser for the investment community as a whole, with a few zero-sum winners here and there? — Read More
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only condition (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, analyzed the essays with NLP, and scored them with help from human teachers and an AI judge. Across groups, named entities (NERs), n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in proportion to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement, while Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was lowest in the LLM group and highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs: over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning. — Read More
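The abstract doesn't spell out the text-analysis pipeline, but the within-group n-gram homogeneity it mentions can be pictured with a small sketch. Everything below (bigrams, Jaccard similarity, the helper names) is an illustrative assumption, not the paper's method:

```python
# Illustrative sketch only: one plausible way to quantify the within-group
# n-gram homogeneity the abstract reports. Bigrams and Jaccard similarity
# are my assumptions, not the paper's actual pipeline.

from itertools import combinations

def ngrams(text: str, n: int = 2) -> set:
    """Set of word n-grams (bigrams by default) in one essay."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two n-gram sets, 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def within_group_homogeneity(essays: list[str]) -> float:
    """Mean pairwise Jaccard similarity inside one condition; higher means
    the group's essays reuse more of the same phrasings."""
    grams = [ngrams(e) for e in essays]
    pairs = list(combinations(grams, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Usage: compare the three conditions (essay lists are placeholders).
# for name, essays in [("LLM", llm_essays), ("Search", search_essays),
#                      ("Brain-only", brain_essays)]:
#     print(name, round(within_group_homogeneity(essays), 3))
```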
RL-as-a-Service will outcompete AGI companies (and that’s good)
Companies drive AI development today. There are two stories you could tell about the mission of an AI company:
AGI: AI labs will stop at nothing short of Artificial General Intelligence. With enough training and iteration AI will develop a general ability to solve any (feasible) task. We can leverage this general intelligence to solve any problem, including how to make a profit.
Reinforcement Learning-as-a-Service (RLaaS)[1]: AI labs have an established process for training language models to attain high performance on clean datasets. By painstakingly creating benchmarks for problems of interest, they can solve any given problem with RL leveraging language models as a general-purpose prior. This is essentially a version of the CAIS model. — Read More
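A minimal sketch of the RLaaS pattern as described, under stated assumptions (none of these names are a real API): the painstaking, human-driven step is curating a benchmark with clean programmatic graders, while the RL loop that fine-tunes a pretrained language model against the benchmark score is generic.

```python
# Hypothetical sketch of the RLaaS pattern: none of these names are a real
# API. The costly step is build_benchmark(); rl_finetune() is generic.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    grade: Callable[[str], float]  # clean, programmatic reward in [0, 1]

def build_benchmark() -> list[Task]:
    """The painstaking, human-driven step: curate problems of interest and
    write unambiguous graders for them."""
    return [
        Task(prompt="Sort [3, 1, 2] in ascending order.",
             grade=lambda out: float("[1, 2, 3]" in out)),
        # ... many more curated tasks per customer problem ...
    ]

def rl_finetune(model, benchmark: list[Task], steps: int = 1_000):
    """Generic RL loop: the pretrained LM is the general-purpose prior,
    and the benchmark score is the only training signal."""
    for _ in range(steps):
        for task in benchmark:
            completion = model.sample(task.prompt)         # hypothetical API
            reward = task.grade(completion)
            model.update(task.prompt, completion, reward)  # e.g. a PPO step
    return model
```

The sketch makes the claim concrete: benchmark curation is the bespoke, per-problem service, while the RL machinery is reusable across problems.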
DOGE’s Flops Shouldn’t Spell Doom for AI In Government
Just a few months after Elon Musk's retreat from his unofficial role leading the Department of Government Efficiency (DOGE), we have a clearer picture of his vision of a government powered by artificial intelligence, and it has a lot more to do with consolidating power than benefiting the public. Even so, we must not lose sight of the fact that a different administration could wield the same technology to advance a more positive future for AI in government.
To most on the American left, the DOGE end game is a dystopic vision of a government run by machines that benefits an elite few at the expense of the people. It includes AI rewriting government rules on a massive scale, salary-free bots replacing human functions, and a nonpartisan civil service forced to adopt an alarmingly racist and antisemitic Grok AI chatbot built by Musk in his own image. And yet, despite Musk's proclamations about driving efficiency, little in the way of cost savings has materialized and few successful examples of automation have been realized. — Read More
Open Global Investment as a Governance Model for AGI
This paper introduces the “open global investment” (OGI) model, a proposed governance framework for artificial general intelligence (AGI) development. The core idea is that AGI development could proceed within one or more corporations in a context that (a) encourages wide international shareholding, (b) reduces the risk of expropriation, (c) implements strengthened corporate governance processes, (d) operates within a government-defined framework for responsible AI development (and/or a public-private partnership), and (e) includes additional international agreements and governance measures to whatever extent is desirable and feasible. We argue that this model, while very imperfect, offers advantages in terms of inclusiveness, incentive compatibility, and practicality compared to prominent alternatives—such as proposals modelled on the Manhattan project, CERN, or Intelsat—especially in scenarios with short AGI timelines. — Read More
How To Become A Mechanistic Interpretability Researcher
Mechanistic interpretability (mech interp) is, in my incredibly biased opinion, one of the most exciting research areas out there. We have these incredibly complex AI models that we don’t understand, yet there are tantalizing signs of real structure inside them. Even partial understanding of this structure opens up a world of possibilities, yet is neglected by 99% of machine learning researchers. There’s so much to do!
I think mech interp is an unusually easy field to learn about on your own: there are lots of educational materials, you don't need much compute, and the feedback loops are short. But if you're new, it can feel pretty intimidating to get started. This is my updated guide on how to skill up, get involved, and reach the point where you can do actual research, with some advice on how to go from there to a career or academic role in the field.
This guide is deliberately highly opinionated. My goal is to convey a productive mindset and concrete steps that I think will work well, and give a sense of direction, rather than trying to give a fully broad overview or perfect advice. — Read More
Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence
This paper examines changes in the labor market for occupations exposed to generative artificial intelligence, using high-frequency administrative data from the largest payroll software provider in the United States. We present six facts that characterize these shifts. We find that since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment, even after controlling for firm-level shocks. In contrast, employment for workers in less exposed fields, and for more experienced workers in the same occupations, has remained stable or continued to grow. We also find that adjustments occur primarily through employment rather than compensation. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor. Our results are robust to checks against alternative explanations, such as excluding technology-related firms and excluding occupations amenable to remote work. These six facts provide early, large-scale evidence consistent with the hypothesis that the AI revolution is beginning to have a significant and disproportionate impact on entry-level workers in the American labor market. — Read More
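For readers wondering what "a relative decline after controlling for firm-level shocks" means operationally, here is a hedged sketch of the standard difference-in-differences design with firm fixed effects. The dataset and column names are hypothetical; this is not the paper's code.

```python
# Hedged sketch of the kind of regression behind the "13 percent relative
# decline" estimate: exposed early-career workers vs. others, before vs.
# after generative AI adoption, with firm and month fixed effects absorbing
# firm-level shocks. Dataset and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm x occupation x age-group x month employment panel.
df = pd.read_csv("payroll_panel.csv")

# exposed_young = 1 for workers aged 22-25 in highly AI-exposed occupations;
# post = 1 for months after widespread generative AI adoption.
model = smf.ols(
    "log_employment ~ exposed_young * post + C(firm_id) + C(month)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

# The interaction term is the relative (log-point) employment change for
# exposed early-career workers, net of firm and time shocks.
print(model.params["exposed_young:post"])
```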
The Evidence That AI Is Destroying Jobs For Young People Just Got Stronger
In a moment with many important economic questions and fears, I continue to find this among the more interesting mysteries about the US economy in the long run: Is artificial intelligence already taking jobs from young people?
If you’ve been casually following the debate over AI and its effect on young graduates’ employment, you could be excused for thinking that the answer to that question is “possibly,” or “definitely yes,” or “almost certainly no.”
… To be honest with you, I considered this debate well and truly settled. No, I’d come to think, AI is probably not wrecking employment for young people. But now, I’m thinking about changing my mind again. — Read More
Every Abstraction Is a Door and a Wall: The Hidden Law of Abstraction
TL;DR: Virtualization emerges as the strategy for increasing efficiency and achieving feats that physical reality never could, to the point where even our work, friends, and experiences have gone virtual. But what's the real cost of living in abstractions, and could reality itself be just another layer we can't see through?
A July 2025 MIT study examined how large language models (LLMs) handle complex, changing information. Researchers tasked the models with predicting the final arrangement of a set of scrambled digits after a series of moves. Transformer models learned to skip explicit simulation of every move: instead of following the state changes step by step, they organized the moves into hierarchies and still arrived at reasonable predictions.
In other words, the AI developed its own internal "language" of shortcuts to solve the task more efficiently. Does this hint at a broader truth? When faced with complexity, intelligent systems (biological or artificial) seek compressed, virtual representations that capture the essence without expending the energy to simulate every detail. — Read More
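One way to make the "hierarchy of shortcuts" concrete (a sketch of the concept, not the study's code): tracking where scrambled digits end up is permutation composition, and because composition is associative, the moves can be merged pairwise in a tree of logarithmic depth instead of being simulated one at a time.

```python
# Illustration of the hierarchical shortcut (my sketch, not the study's
# code). Tracking where scrambled digits end up after a series of moves is
# permutation composition. Step-by-step simulation applies each move in
# turn; because composition is associative, moves can instead be merged
# pairwise in a tree, collapsing many moves into one before ever touching
# the digits.

def apply(perm, state):
    """Apply one move: new_state[j] = state[perm[j]]."""
    return [state[perm[j]] for j in range(len(perm))]

def compose(p, q):
    """One permutation equivalent to applying p first, then q."""
    return [p[q[j]] for j in range(len(p))]

def reduce_tree(moves):
    """Merge moves pairwise, level by level: log-depth instead of one
    sequential application per move (the 'hierarchy' of shortcuts)."""
    while len(moves) > 1:
        merged = [compose(moves[i], moves[i + 1])
                  for i in range(0, len(moves) - 1, 2)]
        if len(moves) % 2:          # odd move out rides along unchanged
            merged.append(moves[-1])
        moves = merged
    return moves[0]

digits = [0, 1, 2, 3]
moves = [[1, 0, 2, 3], [0, 2, 1, 3], [3, 1, 2, 0]]  # arbitrary shuffles

stepwise = digits
for m in moves:                     # explicit simulation of every move
    stepwise = apply(m, stepwise)

shortcut = apply(reduce_tree(moves), digits)  # one merged move
assert stepwise == shortcut == [3, 2, 0, 1]   # same answer, shallower path
```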
Google and Grok are catching up to ChatGPT, says a16z’s latest AI report
Rivals like Google's Gemini, xAI's Grok and, to a lesser extent, Meta AI are closing the gap with ChatGPT, OpenAI's popular AI chatbot, according to a new report on the consumer AI landscape from venture firm Andreessen Horowitz.
The report, in its fifth iteration, showcases two and a half years of data about consumers’ evolving use of AI products.
And for the fifth time, 14 companies appeared on the list of top AI products: ChatGPT, Perplexity, Poe, Character AI, Midjourney, Leonardo, Veed, Cutout, ElevenLabs, Photoroom, Gamma, QuillBot, Civitai, and Hugging Face. — Read More