Master Any Skill Faster With an AI Learning System

You can learn almost anything online.

So why does it still feel slow?

Most “learning” is simply the collection of information. Tabs. Notes. Videos. Highlights.

But skill only grows when you do three things again and again:

Try → Get feedback → Try again.

AI can make that loop faster — if you use it like a system, not a chat. — Read More

#training

Top 10 YouTube Channels for Learning AI in 2026

Around 2.5 billion people used YouTube in January 2025, and a decent chunk of them are trying to figure out this whole AI thing. The platform has quietly become the best place to learn artificial intelligence without spending thousands on courses or going back to school. You can find everything from mathematical breakdowns to practical coding tutorials, and most of it is actually free.

The problem is not finding content but finding good content. YouTube is full of channels that either oversimplify to the point of being useless or overcomplicate to the point where you need a PhD to follow along. After watching dozens of hours of AI tutorials and checking what people are actually recommending in 2026, I put together this list of ten channels that actually teach you something useful. — Read More

#training

AI Tried to Replace Software Engineers — Here’s What Actually Happened

Every few months, we hear the same prediction:
“Software engineers will be obsolete in 6 to 12 months.”

This time, the warning came with a bold experiment.

The Cursor team — backed by billions in venture capital — decided to prove that AI agents could replace engineers. Instead of just talking about it, they launched a real test:
Hundreds of AI agents working nonstop for a week to build a web browser from scratch.

Building a browser is one of the hardest engineering challenges in modern software. Even Microsoft struggled with it for years. So if AI could pull this off, it would be a huge milestone.

But what happened next tells a very different story. — Read More

#devops

A Guide to Which AI to Use in the Agentic Era

I have written eight of these guides since ChatGPT came out, but this version represents a significant break from the past, because what it means to “use AI” has changed dramatically. Until a few months ago, for the vast majority of people, “using AI” meant talking to a chatbot in a back-and-forth conversation. But over the past few months, it has become practical to use AI as an agent: you can assign it a task and it carries that task out, using tools as appropriate. Because of this change, you have to consider three things when deciding which AI to use: Models, Apps, and Harnesses. Models are the underlying AI brains; Apps are the products you actually use to talk to a model; and Harnesses are what let the power of AI models do real work. Until recently, you didn’t have to know any of this.

This shift means the question “which AI should I use?” has gotten harder to answer, because the answer now depends on what you’re trying to do with it. So let me walk through the landscape. — Read More

#devops

AI Found Twelve New Vulnerabilities in OpenSSL

The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one. — Read More

#cyber

Why I don’t think AI is a bubble

Most of the people I like think AI is a bubble. This is a tricky topic to discuss, because the “bubble” framing couples financial and technical issues. It’s like a sports fan debating “Is this player overrated?” The answer depends on how good you think that player is, and how good you think other people think they are.

I don’t have anything much to add to the financial part of the “AI bubble” conversation. Various equity prices are based on very optimistic estimates about how AI will progress. This post is about the technological question. I’ll leave it to you to judge what sort of forecast any given asset price actually represents.

The main case I want to make is that performance probably won’t plateau — or at least, the common arguments for why it will plateau don’t add up. — Read More

#strategy

BCIs in 2026: Still Janky, Still Dangerous, Still Overhyped

Alright, another year, another batch of venture capital pouring into ‘mind-reading’ startups that promise to turn your thoughts into Twitter threads. Frankly, it’s exhausting. We’re in 2026, and the fundamental problems that plagued Brain-Computer Interfaces (BCIs) a decade ago are still here, just wearing slightly shinier packaging. If you think we’re anywhere near seamless neural integration that lets you control a prosthetic arm with the fluidity of a natural limb, or hell, even reliably type at 60 WPM purely by thinking, you’ve been mainlining too much techbro hype. Let’s pull back the curtain on this circus, shall we? Because from an engineering perspective, most of what you hear is, generously, aspirational fiction. — Read More

#human

The Promptware Kill Chain

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques for embedding malicious instructions into an LLM’s inputs. That term suggests a simple, singular vulnerability, and the framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to give policymakers and security practitioners the vocabulary and framework they need to address the escalating AI threat landscape. — Read More

#cyber

Why I don’t think AGI is imminent

The CEOs of OpenAI and Anthropic have both claimed that human-level AI is just around the corner — and at times, that it’s already here. These claims have generated enormous public attention. There has been some technical scrutiny of these claims, but critiques rarely reach the public discourse. This piece is a sketch of my own thinking about the boundary between transformer-based large language models and human-level cognition. I have an MS degree in Machine Learning from over a decade ago, and I don’t work in the field of AI currently, but I am well-read on the underlying research. If you know more than I do about these topics, please reach out and let me know; I would love to develop my thinking on this further. — Read More

#strategy

The “AI Kills SaaS” Take Is Lazy. Here’s What’s Actually Happening.

HubSpot’s revenue is up 19%.
Xero is up 23%.
Atlassian is up 23%.
Figma is growing at 40%.
Adobe added another 11% to hit $23.8 billion.

And every single one of their stock prices has been absolutely destroyed this year.

Here is HubSpot, currently at $228, down from a 52-week high of $881.

…So what’s going on? The popular take is simple: AI has arrived, SaaS is dead, pack it up. … I wanted to go deeper. — Read More

#strategy