Channeling his inner Steve Jobs, OpenAI CEO Sam Altman revealed plans on Wednesday to drastically simplify the company’s product lineup, merging its scattered collection of AI models into a single unified system.
… Echoing Jobs’s famous catchphrase, Altman tweeted, “We want AI to ‘just work’ for you; we realize how complicated our model and product offerings have gotten.” — Read More
Deep Research and Knowledge Value
“When did you feel the AGI?”
This is a question that has been floating around AI circles for a while, and it’s a hard one to answer for two reasons. First, what is AGI? Second, “feel” is a bit like obscenity: as Supreme Court Justice Potter Stewart famously said in Jacobellis v. Ohio, “I know it when I see it.”
I gave my definition of AGI in AI’s Uneven Arrival: …My definition of AGI is that it can be ammunition, i.e. it can be given a task and trusted to complete it at a good-enough rate (my definition of Artificial Super Intelligence (ASI) is the ability to come up with the tasks in the first place).
The “feel” part of that question is a more recent discovery: Deep Research from OpenAI feels like AGI; I just got a new employee for the shockingly low price of $200/month. — Read More
Researchers created an open rival to OpenAI’s o1 ‘reasoning’ model for under $50
AI researchers at Stanford and the University of Washington were able to train an AI “reasoning” model for under $50 in cloud compute credits, according to a new research paper released last Friday.
The model, known as s1, performs similarly to cutting-edge reasoning models, such as OpenAI’s o1 and DeepSeek’s R1, on tests measuring math and coding abilities. The s1 model is available on GitHub, along with the data and code used to train it. — Read More
Why the AI world is suddenly obsessed with a 160-year-old economics paradox
Last week, news spread that a Chinese AI company, DeepSeek, had built a cutting-edge chatbot at a fraction of the cost of its American competitors. It sent the stock prices of American tech companies plummeting.
But Microsoft CEO Satya Nadella put a happy spin on the whole episode, citing a 160-year-old economics concept to suggest that this was good news.
“Jevons paradox strikes again!” Nadella wrote on social media, sharing the concept’s Wikipedia page. “As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of.” — Read More
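To make the mechanism Nadella is invoking concrete, here is a toy constant-elasticity sketch in Python. The numbers and the elasticity value are purely illustrative assumptions, not real market data; the point is only that when demand for AI work is elastic enough, cutting the cost per query raises total spending on compute rather than lowering it.

```python
# Toy illustration of Jevons paradox (illustrative numbers, not real data):
# if the cost per unit of AI work falls and demand is sufficiently elastic,
# total spending on compute rises instead of falling.

def queries_demanded(cost_per_query, elasticity=1.5, baseline=1.0):
    """Constant-elasticity demand curve: queries ~ cost ** (-elasticity)."""
    return baseline * cost_per_query ** (-elasticity)

for cost in [1.0, 0.5, 0.25]:  # efficiency gains cut the cost per query
    q = queries_demanded(cost)
    print(f"cost/query={cost:4.2f}  queries={q:5.2f}  total spend={cost * q:4.2f}")

# Total spend climbs from 1.00 to 1.41 to 2.00 as the per-query cost falls:
# the Jevons-style outcome Nadella is betting on.
```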
It’s time to come to grips with AI
We live in interesting times. On Monday morning, tech stocks plunged on investor shock and awe over DeepSeek, a Chinese AI company that has built — I’m leaving out a lot of details — an open-source large language model (LLM) that performs competitively with name brands like ChatGPT at a fraction of the computing cost.
Meanwhile, two stories got buried in the avalanche of activity by President Trump last week. Trump rescinded a Biden executive order on AI safety. And he announced Stargate, a $500 billion AI infrastructure joint venture aimed at entrenching American AI competitiveness, which has triggered a feud between Elon Musk and Sam Altman, the frenemy cofounders of OpenAI.
These stories will have far bigger geopolitical implications than, say, Musk’s choice of hand gestures. They may even mark an inflection point where the world has decided to charge forward with AI at full speed, for better or worse. — Read More
Deep-learning enabled generalized inverse design of multi-port radio-frequency and sub-terahertz passives and integrated circuits
Millimeter-wave and terahertz integrated circuits and chips are expected to serve as the backbone for future wireless networks and high-resolution sensing. However, designing these integrated circuits and chips can be quite complex, requiring years of human expertise, careful tailoring of hand-crafted circuit topologies, and co-design with parameterized, pre-selected templates of electromagnetic structures. These structures (radiative and non-radiative, single-port and multi-port) are subsequently optimized through ad hoc methods and parameter sweeps. Such bottom-up approaches with pre-selected regular topologies also fundamentally limit the design space. Here, we demonstrate a universal inverse design approach for arbitrarily shaped, complex multi-port electromagnetic structures with designer radiative and scattering properties, co-designed with active circuits. To allow such universalization, we employ deep-learning-based models and demonstrate synthesis with several examples of complex mm-Wave passive structures and end-to-end integrated mm-Wave broadband circuits. The presented inverse design methodology, which produces designs in minutes, can be transformative in opening up a new, previously inaccessible design space. — Read More
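As a rough sketch of how deep-learning-driven inverse design of this kind can work in general, the snippet below pairs a differentiable neural surrogate (pixelated metal pattern in, predicted frequency response out) with gradient descent over the input pattern to hit a target response. This is a generic, hypothetical illustration: the network, grid size, frequency sampling, and optimizer settings are all assumptions for demonstration, not the architecture or representation used in the paper.

```python
# Minimal sketch of surrogate-based inverse design (illustrative only).
# A small neural network stands in for an EM solver, mapping a 16x16
# pixelated metal pattern to |S21| sampled at 64 frequency points; we then
# optimize the pattern by gradient descent to match a target response.
import torch
import torch.nn as nn

class SurrogateEM(nn.Module):
    """Toy forward model: metal pattern -> predicted |S21| vs. frequency."""
    def __init__(self, n_pix=16, n_freq=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_pix * n_pix, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_freq), nn.Sigmoid(),  # |S21| constrained to [0, 1]
        )

    def forward(self, pattern):
        return self.net(pattern)

def inverse_design(surrogate, target_response, n_pix=16, steps=2000, lr=0.05):
    """Gradient-descend a continuous latent pattern so the surrogate's
    prediction matches the target, then threshold to a binary metal mask."""
    latent = torch.zeros(1, n_pix, n_pix, requires_grad=True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pattern = torch.sigmoid(latent)            # keep pixel values in [0, 1]
        loss = torch.mean((surrogate(pattern) - target_response) ** 2)
        loss.backward()
        opt.step()
    return (torch.sigmoid(latent) > 0.5).float()   # binarized metal pattern

# Usage: ask for a band-pass-like response with a pass band in the middle.
surrogate = SurrogateEM()              # in practice, trained on EM simulations
target = torch.zeros(1, 64)
target[:, 24:40] = 1.0
design = inverse_design(surrogate, target)
print(design.shape)                    # torch.Size([1, 16, 16])
```

In a real flow the surrogate would be trained on a large set of full-wave simulations, and any pattern it proposes would still be verified in an EM solver before fabrication.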
Google DeepMind CEO: AI-Designed Drugs Coming to Clinical Trials in 2025
Nobel laureate and Google DeepMind CEO Demis Hassabis said Tuesday (Jan. 21) that he expects pharmaceutical drugs designed by artificial intelligence (AI) to be in clinical trials by the end of the year.
During a fireside chat at the World Economic Forum in Davos, Switzerland, Hassabis said these drugs are being developed at Isomorphic Labs, a for-profit venture created by Google parent company Alphabet in 2021 and tasked with reinventing the entire drug discovery process from first principles, led by AI.
“That’s the plan,” Hassabis said. — Read More
AI Founder’s Bitter Lesson. Chapter 1 – History Repeats Itself
- Historically, general approaches always win in AI.
- Founders in the AI application space are now repeating the mistakes AI researchers made in the past.
- Better AI models will enable general-purpose AI applications. At the same time, the added value of the software around the AI model will diminish.
Recent AI progress has enabled new products that solve a broad range of problems. I saw this firsthand watching over 100 pitches during YC alumni Demo Day. These problems share a common thread – they’re simple enough to be solved with constrained AI. Yet the real power of AI lies in its flexibility. While products with fewer constraints generally work better, current AI models aren’t reliable enough to build such products at scale. We’ve been here before with AI, many times. Each time, the winning move has been the same. AI founders need to learn this history, or I fear they’ll discover these lessons the hard way. — Read More
What Does OpenAI’s Sam Altman Mean When He Says AGI is Achievable?
Sam Altman started 2025 with a bold declaration: OpenAI has figured out how to create artificial general intelligence (AGI), a term commonly understood as the point where AI systems can comprehend, learn, and perform any intellectual task that a human can.
In a reflective blog post published over the weekend, he also said the first wave of AI agents could join the workforce this year, marking what he describes as a pivotal moment in technological history. — Read More
Is OpenAI o3 Really AGI?
The world may have changed, and we might not have realized it yet.
Yesterday, OpenAI shocked everyone (and this is not hyperbole) with the announcement of OpenAI o3 and o3-mini, the brand-new models in the ‘o’ family (they skipped ‘o2’ for trademark reasons).
The o3 results are so astonishing that some people are genuinely convinced it is AGI, as it demolishes some of the so-called ‘impossible’ benchmarks for current models. — Read More