“No vibe coding while I’m on call!” declared Jessie Young, Principal Engineer at GitLab, encapsulating the fierce debate dividing the software development world. On one side stand cautious veterans like Brendan Humphreys, CTO of Canva, who insists, “No, you won’t be vibe coding your way to production.” On the other stand technology giants like Google co-founder Sergey Brin, who actively encourages engineers to embrace AI-generated code, reporting “10 to 100x speedups” in productivity.
“Vibe coding”—a term coined by AI pioneer Dr. Andrej Karpathy, key architect behind ChatGPT at OpenAI—has rapidly evolved from casual meme to industry-transforming methodology. In their forthcoming book Vibe Coding: Building Production-Grade Software with GenAI, Chat, Agents, and Beyond, technology veterans Gene Kim and Steve Yegge wade into this contentious territory with a bold claim: this isn’t just another development fad but a fundamental paradigm shift that will render traditional manual coding obsolete. — Read More
Writing in the Age of LLMs
In the last couple of years, I’ve written and reviewed several technical papers and blog posts. I often come across LLM-generated writing that feels slightly “off”—sometimes, to be honest, even uninviting. At the same time, I get tremendous value from using LLMs to draft early versions, summarize dense material, and rephrase messy thoughts.
This post details some of my thoughts on writing in a world where much of what we read is now machine-generated. First, I’ll lay out some common patterns of bad writing I see from LLM tools. Then, I’ll defend some writing habits that people often dismiss as “LLM-sounding” but are actually fine—even helpful—when used intentionally. Finally, I’ll share concrete rules and formulas I rely on in my own writing and in the prompts I use to guide LLMs. — Read More
Midjourney launches its first AI video generation model, V1
Midjourney, one of the most popular AI image generation startups, announced on Wednesday the launch of its much-anticipated AI video generation model, V1.
V1 is an image-to-video model: users can upload an image, or use one generated by Midjourney’s other models, and V1 will produce a set of four five-second videos based on it. Much like Midjourney’s image models, V1 is available only through Discord, and only on the web at launch. — Read More
The Role of AI and Compliance in Modern Risk Management: ShowMeCon 2025
When people think of St. Louis, it’s often the Gateway Arch or the Cardinals that come to mind. Just across the Missouri River lies St. Charles, one of the “Show Me” state’s oldest European settlements, dating back to 1769. Just a stone’s throw from where Lewis and Clark set off on their famous expedition, something more than baseball statistics, historical trivia, or architectural wonders was being discussed in early June: security, compliance, and risk, at ShowMeCon 2025.
Around 400 practitioners gathered for two full days of sessions, villages, and a CTF run by MetaCTF. There was much discussion of the industry’s distinction between controls, policies, and security. A general theme emerged that real security demands context, rigor, and an adaptive posture, not just checking the box. Here are just a few highlights from the 2025 edition of ShowMeCon. — Read More
Real-Time Action Chunking with Large Models
Unlike chatbots or image generators, robots must operate in real time. While a robot is “thinking”, the world around it evolves according to physical laws, so delays between inputs and outputs have a tangible impact on performance. For a language model, the difference between fast and slow generation is a satisfied or annoyed user; for a vision-language-action model (VLA), it could be the difference between a robot handing you a hot coffee or spilling it in your lap. While VLAs have achieved promising results in open-world generalization, they can be slow to run. Like their cousins in language and vision, these models have billions of parameters and require heavy-duty GPUs. On edge devices like mobile robots, that adds even more latency for network communication between a centralized inference server and the robot. — Read More
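The latency problem described above is commonly addressed with action chunking: the model predicts a short sequence of actions, and the robot executes that chunk while the next one is computed in parallel. Below is a minimal sketch of that overlap in plain Python with threads; `CHUNK`, `STEP_DT`, `predict_chunk`, and `run_control_loop` are all illustrative assumptions, not the interface of any actual VLA system.

```python
import threading
import time
from collections import deque

CHUNK = 8       # actions per predicted chunk (assumed horizon)
STEP_DT = 0.02  # control period in seconds (assumed 50 Hz loop)

def predict_chunk(obs):
    """Stand-in for a slow VLA forward pass that returns CHUNK actions.
    A real model call would take tens to hundreds of milliseconds."""
    time.sleep(0.05)  # simulated inference + network latency
    return [obs + i for i in range(CHUNK)]

def run_control_loop(n_steps):
    """Execute one action per control tick while the next chunk is
    predicted in a background thread, so the robot never stalls."""
    queue = deque(predict_chunk(0))  # warm start with one chunk
    executed, worker = [], None
    for t in range(n_steps):
        # Kick off the next inference once the current chunk is half
        # consumed, overlapping computation with execution.
        if worker is None and len(queue) <= CHUNK // 2:
            worker = threading.Thread(
                target=lambda: queue.extend(predict_chunk(t)))
            worker.start()
        if worker is not None and not worker.is_alive():
            worker = None
        while not queue:  # back-pressure if a chunk arrives late
            time.sleep(STEP_DT / 10)
        executed.append(queue.popleft())  # one action per tick
        time.sleep(STEP_DT)
    return executed
```

The point of the sketch is that inference for the next chunk is hidden behind execution of the current one; real systems additionally blend overlapping chunks so the handoff between them is smooth rather than abrupt.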
How not to lose your job to AI
Around half of people are worried they’ll lose their job to AI.1 And they’re right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more safely than humans, and make accurate medical diagnoses.2 And it’s set to continue improving rapidly.
But what’s less appreciated is that while AI drives down the value of skills it can perform, it drives up the value of skills it can’t, because those become the bottlenecks to further automation (for a while, at least). As I’ll explain, ATMs actually increased employment of bank tellers, until online banking finished the job.
Your best strategy is to learn the skills that AI will make more valuable, trying to ride the wave one step ahead of automation. — Read More
Starting a Security Program from Scratch (or re-starting)
I’ve had a number of requests to write a post about how to start and grow a new security program – or a substantial reassessment and rebuild of an existing program.
This is a difficult one to write because, as you all know, there is no one-size-fits-all approach. Starting from scratch at a 10-person startup is very different from (re-)building a security program in a more established organization. What I’ve tried to do here, instead, is develop a framework and step-by-step guide that applies to pretty much any type of organization. Depending on your risk and stage of development, you may only need to go halfway through the various steps. Later, as your organization grows in size, stature, or criticality, you might need to do the whole thing.
There are four phases of maturity, each with its own steps. But basically it’s all about (1) starting to face in the right direction, (2) getting the basics done, (3) making those basics routine and sustainable, and then, if you need to, (4) making it much more advanced and strategic. — Read More
What’s Next in AI?
[T]oday, we are examining the latest research from Google, Cohere, Apple, MIT, Mistral, NVIDIA, and more to determine what the incumbents are most excited about and what breakthroughs will matter in the coming months.
It’s an honest, hard look at what AI currently is (and isn’t), and by the end you’ll be up to date with the industry at a depth few others reach. — Read More
Google DeepMind’s Logan Kilpatrick says AGI will be a product experience. Not a model.
His bet: whoever nails memory + context around a decent model at the product level wins. Users will suddenly feel like they’re talking to AGI. — Read More
Will AI take your job? The answer could hinge on the 4 S’s of the technology’s advantages over humans
If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.
But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise — and where they don’t — will be key to adapting to the AI-infused workforce.
AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope and sophistication. Understanding these dimensions is the key to understanding AI-human replacement. — Read More