OpenAI “models” are a Mockery of the Century

Compared to models such as DeepSeek, Qwen, and many others

Here is the prompt I submitted to the Qwen3-235B-Think-CS model (this is but one example of how competitors surpass OpenAI, big time, in common-sense reasoning):

I have Lenovo t470s with windows 10 pro. I plugged in Lexar 32GB card in it but it is not recognized neither in windows explorer nor device manager. I restarted laptop but same thing. I ran Lenovo Vantage, shows latest updates are in, but still Lexar not recognized. Ran Microsoft Lenovo x64 hardware troubleshooter, rebooted, but still lexar not recognized, like it does not exist?!

See the beautiful reasoning this engine provided, free of charge of course (I used the Poe aggregator to access this and many other AI engines, open source and commercial): — Read More

#chatbots

OpenAI can’t beat Google in consumer AI

OpenAI can’t beat Google at consumer AI as long as we are in the “chatbot” paradigm. The clock is ticking for OpenAI to pull a rabbit out of the hat, and soon (in December). It’s worrisome that OpenAI’s best effort at front-running the Gemini 3 release was GPT-5.1, which was barely an improvement. Most importantly, Google has much cheaper inference COGS than OpenAI, thanks to its vertical AI integration (with TPUs) and scale. That allows Google to commoditize whatever OpenAI puts out, making monetization impossible.

Google’s data advantage, especially in multi-modal, is really shining. Because Google is so strong in multi-modal, Gemini 3 just destroyed Sonnet 4.5 in frontend UI coding (which is a visual task). Little things like this make Google hard to beat, because OpenAI can’t synthetically generate every type of data for training, e.g. YouTube or Google Maps. — Read More

#big7

AI Eats the World

Twice a year, Benedict Evans produces a big presentation exploring macro and strategic trends in the tech industry. New in November 2025: ‘AI eats the world’.

This post includes the slides for that presentation, as well as videos of Evans’ presentations on YouTube. — Read More

#strategy

Continuous Thought Machines

Biological brains demonstrate complex neural activity, where neural dynamics are critical to how brains process information. Most artificial neural networks ignore the complexity of individual neurons. We challenge that paradigm. By incorporating neuron-level processing and synchronization, we reintroduce neural timing as a foundational element. We present the Continuous Thought Machine (CTM), a model designed to leverage neural dynamics as its core representation. The CTM has two innovations: (1) neuron-level temporal processing, where each neuron uses unique weight parameters to process incoming histories; and (2) neural synchronization as a latent representation. The CTM aims to strike a balance between neuron abstractions and biological realism. It operates at a level of abstraction that effectively captures essential temporal dynamics while remaining computationally tractable. We demonstrate the CTM’s performance and versatility across a range of tasks, including solving 2D mazes, ImageNet-1K classification, parity computation, and more. Beyond displaying rich internal representations and offering a natural avenue for interpretation owing to its internal process, the CTM is able to perform tasks that require complex sequential reasoning. The CTM can also leverage adaptive compute, where it can stop earlier for simpler tasks, or keep computing when faced with more challenging instances. The goal of this work is to share the CTM and its associated innovations, rather than pushing for new state-of-the-art results. To that end, we believe the CTM represents a significant step toward developing more biologically plausible and powerful artificial intelligence systems. An accompanying interactive online demonstration and an extended technical report are available. — Read More
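
The abstract is dense, so here is a minimal, hedged sketch in Python of the two ideas it names: per-neuron weights applied to a short activation history, and pairwise synchronization of neuron traces used as the latent representation. Every name, size, and update rule below is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch of the two CTM ingredients named in the abstract:
# (1) per-neuron temporal processing over a short activation history, and
# (2) pairwise "synchronization" of neuron activity used as a latent vector.
# Sizes, names, and the update rule are assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

N = 16          # number of neurons (assumed)
H = 8           # history length each neuron sees (assumed)
TICKS = 32      # internal "thought" ticks (assumed)

# (1) Each neuron gets its own private weights over its incoming history,
# rather than all neurons sharing one pointwise nonlinearity.
per_neuron_w = rng.normal(scale=0.5, size=(N, H))
per_neuron_b = np.zeros(N)
recurrent_w = rng.normal(scale=0.3, size=(N, N)) / np.sqrt(N)

history = np.zeros((N, H))          # rolling pre-activation history per neuron
activations = np.zeros((TICKS, N))  # post-activations recorded over ticks
x = rng.normal(size=N)              # a stand-in input drive (assumed)

for t in range(TICKS):
    prev = activations[t - 1] if t > 0 else np.zeros(N)
    pre = recurrent_w @ prev + x
    history = np.roll(history, shift=-1, axis=1)
    history[:, -1] = pre
    # each neuron applies its own weights to its own history
    post = np.tanh(np.einsum("nh,nh->n", per_neuron_w, history) + per_neuron_b)
    activations[t] = post

# (2) Synchronization as representation: how correlated each pair of neuron
# activation traces is across ticks; the upper triangle becomes a fixed-size
# latent vector that a readout head could consume.
sync = np.corrcoef(activations.T)   # (N, N) pairwise correlations
iu = np.triu_indices(N, k=1)
latent = sync[iu]
print(latent.shape)                 # (120,)
```

In the real model a readout would consume something like `latent`; the sketch only makes the two ingredients concrete.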

#human

Aging as a disease: The rise of longevity science

In October, the Roots of Progress Institute organized Progress Conference 2025 to connect people and ideas in the progress movement.

In this dispatch, medical historian Laura Mazer explores the conference’s longevity track, where researchers, economists, and entrepreneurs shared new ways to extend not just lifespan, but healthspan. 

She finds that the frontier of medicine is shifting — from fighting disease to pursuing more life itself. — Read More

#human

The Network is the Product: Data Network Flywheel, Compound Through Connection

The value of a data product is never contained within its boundaries. It emerges from the number, quality, and friction of its connections, and from the signals it produces. Connectivity is the architecture that turns isolated signals into coordinated intelligence. The mistake most teams make is assuming insight comes from accumulation, when in reality it comes from interaction. — Read More

#data-science

OOP: the worst thing that happened to programming 

In this article, we will try to understand why OOP is the worst thing that happened to programming, how it became so popular, why experienced Java (C#, C++, etc.) programmers can’t really be considered great engineers, and why code in Java cannot be considered good.

Unfortunately, programming is quite far from being a science (just like me), so many terms can be interpreted differently. — Read More

#devops

Towards interplanetary QUIC traffic

Have you ever asked yourself which protocols get used when downloading pictures from the Perseverance Mars rover to Earth? I hadn’t thought about that either, until I came across an intriguing message on the internet, back in April 2024:

I’m looking for someone knowledgeable of quic/quinn to help us out for our deep space IP project. Would be of part-time consulting. Please dm me if interested.

The message itself is quite short and somewhat jargon-y, so it took me a few readings to fully realize what the project was about:

— Working with QUIC: an internet protocol for reliable communication (i.e., what we typically use TCP for).
— Working with Quinn: the most popular Rust implementation of the QUIC protocol.
— Using QUIC to communicate between Earth and computers that are far, far away (e.g., other planets).

Business was going well on my end, and I didn’t have much time to dedicate to another consulting engagement, but… How could I say no to an interplanetary internet project? I had contributed to Quinn in the past, so I felt well-equipped to help out and decided to actually do it. This article provides a record of the adventure so far. — Read More
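
To make “far, far away” concrete, here is a small back-of-the-envelope sketch in Python (illustrative only, not taken from the project) of Earth-to-Mars light time, and why round-trip times measured in minutes break the millisecond-scale assumptions a terrestrial QUIC stack is tuned for.

```python
# Why Earth-to-Mars QUIC is hard: light time alone dwarfs the millisecond
# RTTs that QUIC handshakes, idle timeouts, and loss-recovery timers
# normally assume. Distances are the usual closest/farthest Earth-Mars
# figures; everything else is illustrative.
C_KM_PER_S = 299_792.458            # speed of light in vacuum, km/s

MARS_DISTANCE_KM = {
    "closest approach": 54.6e6,     # ~54.6 million km
    "farthest (conjunction)": 401.0e6,
}

for label, km in MARS_DISTANCE_KM.items():
    one_way_min = km / C_KM_PER_S / 60
    rtt_min = 2 * one_way_min
    print(f"{label}: one-way {one_way_min:.1f} min, RTT {rtt_min:.1f} min")

# Approximate output:
#   closest approach: one-way 3.0 min, RTT 6.1 min
#   farthest (conjunction): one-way 22.3 min, RTT 44.6 min
# A stack tuned for terrestrial links (RTTs in milliseconds, idle timeouts
# in seconds) needs reconfiguring or extending before a single handshake
# can even complete over links like these.
```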

#devops

The Bitter Lessons

The United States and China are often said to be in a “race” with one another with respect to artificial intelligence. In a sense this is true, but the metaphor manages to miss almost all that is interesting about US-China dynamics in emerging technology. Today I’d like to offer some brief thoughts about how I see this “race” and where it might be headed.

All metaphors are lossy approximations of reality. But “race” is an especially inapt metaphor for this context. A race is a competition with clear boundaries and a clearly defined finish line. There are no such luxuries to be found here. Beyond the rhyme, “the Space Race” made intuitive sense because the objective was clear: landing humans on the Moon.

Stating that there is an “AI race” underway invites the obvious follow-up question: the AI race to where? And no one—not you, not me, not OpenAI, not the U.S. government, and not the Chinese government—knows where we are headed. — Read More

#china-vs-us

Andrew Ng: LLMs as the Next Geopolitical Weapon & Do Margins Still Matter in AI?

Read More

#videos