The artificial intelligence revolution will be only three years old at the end of November. Think about that for a moment. In just 36 months AI has gone from great-new-toy, to global phenomenon, to where we are today – debating whether we are in one of the biggest technology bubbles or booms in modern times.
To us what’s happening is obvious. We both covered the internet bubble 25 years ago. We’ve been writing about – and in Om’s case investing in – technology since then. We can both say unequivocally that the conversations we are having now about the future of AI feel exactly like the conversations we had about the future of the internet in 1999.
We’re not only in a bubble, but in one that is arguably the biggest technology mania any of us have ever witnessed. — Read More
AI Prompt Engineering Course – Prompt Engineering Beginner COMPLETE Guide and for PROS (2025)
The Space of Intelligences is Large (Andrej Karpathy)
Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we’ve ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology. — Read More
The First AI (Foreign) English Teacher “Takes Office”: The Encounter Between Human Children and Artificial Intelligence
Today’s children are true “AI natives.” They are born and raised in the AI era, and interacting with the digital world is second nature to them. The AI entities that provide their education must likewise be immersive, interactive, personalized, and warm.
The birth of AI English teacher Jessica heralds the future of education: no longer one-way knowledge transmission, but the natural acquisition of the ability to communicate with the world through symbiosis and dialogue with AI. She possesses vast knowledge, boundless patience, a memory that retains every child’s situation, and a warm heart: a true “super-teacher.” — Read More
Hitchhiker’s Guide to Attack Surface Management
I first heard the term “ASM” (Attack Surface Management) sometime in late 2018, and I assumed it must be some complex infrastructure for tracking an organization’s assets. Looking back, I realize I had already built a similar stack for discovering, tracking, and detecting obscure assets of organizations, and I was using it for my bug-hunting adventures. I feel my stack was kinda goated: with it I was able to find obscure assets of Apple, Facebook, Shopify, Twitter, and many other Fortune 100 companies, and I reported hundreds of bugs, all through automation.
… If I search “Guide to ASM” on the Internet, almost none of the supposed guides are real resources. They funnel you toward a vendor’s own ASM solution; the “guide” is there only to offer surface-level information and is mostly a marketing gimmick. This is precisely why I decided to write something.
This guide will give you insight into exactly how big your attack surface really is. CISOs can check whether their organizations have all of these areas covered; security researchers and bug hunters may find new ideas for where to look during recon; devs can see whether they are unintentionally leaving any doors open for hackers. If you are into security, it has something to offer you. — Read More
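To make the discovery side of ASM concrete, here is a minimal sketch, in Python, of one recon step a stack like the author’s might start with: harvesting candidate subdomains from certificate-transparency logs via crt.sh. The crt.sh endpoint and its JSON output are public; the function name and the filtering logic here are illustrative assumptions, not the author’s actual tooling.

```python
import json
import urllib.request

def crtsh_subdomains(domain: str) -> set[str]:
    # Query crt.sh's certificate-transparency index for certificates
    # issued under `domain`. Illustrative sketch: a real ASM stack
    # would add retries, rate limiting, and many more data sources.
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    names: set[str] = set()
    for entry in entries:
        # name_value can hold several newline-separated hostnames.
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")
            if name.endswith(domain):
                names.add(name)
    return names

if __name__ == "__main__":
    for host in sorted(crtsh_subdomains("example.com")):
        print(host)
```

Certificate transparency is only one source; a production stack would merge it with passive DNS, ASN and IP-range data, and port scans, then continuously diff the results to catch newly exposed assets.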
OpenAI CEO Sam Altman’s big warning to employees in his leaked memo: ‘Google has been doing excellent…’
OpenAI CEO Sam Altman has conceded that the company is facing “rough vibes” and “economic headwinds,” just days after Google reclaimed the AI performance crown with its Gemini 3 Pro launch. According to The Information, a leaked memo from last month stands in stark contrast to Altman’s public trillion-dollar ambitions: he reportedly warned employees that revenue growth could plummet to single digits by 2026. — Read More
OpenAI “models” are a Mockery of the Century
Compared to models such as DeepSeek, Qwen, and many others
Here is the prompt I submitted to the Qwen3-235B-Think-CS model (this is but one example of how competitors surpass OpenAI big time in common-sense reasoning):
I have Lenovo t470s with windows 10 pro. I plugged in Lexar 32GB card in it but it is not recognized neither in windows explorer nor device manager. I restarted laptop but same thing. I ran Lenovo Vantage, shows latest updates are in, but still Lexar not recognized. Ran Microsoft Lenovo x64 hardware troubleshooter, rebooted, but still lexar not recognized, like it does not exist?!
See the beautiful reasoning this engine provided, free of charge of course (I used the Poe aggregator to access this and many other AI engines, open source and commercial): — Read More
OpenAI can’t beat Google in consumer AI
OpenAI can’t beat Google at consumer AI as long as we are in the “chatbot” paradigm. The clock is ticking for OpenAI to pull a rabbit out of the hat ASAP (in December). It’s worrisome that OpenAI’s best effort at front-running the Gemini 3 release was GPT-5.1, which was barely an improvement. Most importantly, Google has much cheaper inference COGS (cost of goods sold) than OpenAI, thanks to its vertical AI integration (with TPUs) and its scale. That allows Google to commoditize whatever OpenAI puts out, making monetization impossible.
… Google’s data advantage, especially in multi-modal, is really shining. Because Google is so strong in multi-modal, Gemini 3 just destroyed Sonnet 4.5 in frontend UI coding (which is a visual task). Little things like this make Google hard to beat, because OpenAI can’t synthetically generate every type of training data, e.g., YouTube or Google Maps. — Read More
AI Eats the World
Continuous Thought Machines
Biological brains demonstrate complex neural activity, where neural dynamics are critical to how brains process information. Most artificial neural networks ignore the complexity of individual neurons. We challenge that paradigm. By incorporating neuron-level processing and synchronization, we reintroduce neural timing as a foundational element. We present the Continuous Thought Machine (CTM), a model designed to leverage neural dynamics as its core representation. The CTM has two innovations: (1) neuron-level temporal processing, where each neuron uses unique weight parameters to process incoming histories; and (2) neural synchronization as a latent representation. The CTM aims to strike a balance between neuron abstractions and biological realism. It operates at a level of abstraction that effectively captures essential temporal dynamics while remaining computationally tractable. We demonstrate the CTM’s performance and versatility across a range of tasks, including solving 2D mazes, ImageNet-1K classification, parity computation, and more. Beyond displaying rich internal representations and offering a natural avenue for interpretation owing to its internal process, the CTM is able to perform tasks that require complex sequential reasoning. The CTM can also leverage adaptive compute, where it can stop earlier for simpler tasks, or keep computing when faced with more challenging instances. The goal of this work is to share the CTM and its associated innovations, rather than pushing for new state-of-the-art results. To that end, we believe the CTM represents a significant step toward developing more biologically plausible and powerful artificial intelligence systems. We provide an accompanying interactive online demonstration at this https URL and an extended technical report at this https URL . — Read More
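The CTM’s two innovations are easier to picture in code. Below is a toy sketch in Python/PyTorch of both ideas: private per-neuron weights applied to a short history of that neuron’s pre-activations, and pairwise synchronization across internal ticks as the latent read-out. All names, shapes, and the tanh nonlinearity are illustrative assumptions, not the paper’s implementation.

```python
import torch
import torch.nn as nn

class NeuronLevelModels(nn.Module):
    # Idea (1): instead of a shared pointwise nonlinearity, every neuron
    # applies its own private weights to a window of its pre-activation
    # history. (Toy version; the paper's neuron models are richer.)
    def __init__(self, n_neurons: int, history: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_neurons, history) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, pre_act_hist: torch.Tensor) -> torch.Tensor:
        # pre_act_hist: (batch, n_neurons, history)
        z = (pre_act_hist * self.w).sum(dim=-1) + self.b
        return torch.tanh(z)  # (batch, n_neurons) post-activations

def synchronization(post_acts: torch.Tensor) -> torch.Tensor:
    # Idea (2): represent the state by how pairs of neurons co-fluctuate
    # across internal "thought" ticks, not by a single activation vector.
    # post_acts: (batch, ticks, n_neurons) -> (batch, n_neurons, n_neurons)
    ticks = post_acts.shape[1]
    return torch.einsum("btn,btm->bnm", post_acts, post_acts) / ticks

# Tiny smoke test with made-up sizes.
hist = torch.randn(2, 16, 8)            # batch=2, 16 neurons, history=8
neurons = NeuronLevelModels(16, 8)
acts = torch.stack([neurons(hist) for _ in range(5)], dim=1)  # 5 ticks
latent = synchronization(acts)          # (2, 16, 16) synchronization matrix
```

The synchronization matrix, rather than a single hidden vector, is what the abstract proposes as the latent representation; that internal, tick-by-tick process is also what the authors credit for the model’s interpretability and its ability to spend adaptive compute per input.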