Introducing warmwind OS: The AI Operating System That Works Smarter

What if your operating system didn’t just run your computer but actively worked alongside you, anticipating your needs, learning your habits, and automating your most tedious tasks? Enter warmwind OS, the world’s first AI-driven operating system, a bold leap into the future of human-computer interaction. Unlike traditional systems that passively wait for your commands, warmwind OS operates as a proactive partner, seamlessly blending into your workflows to eliminate inefficiencies and free up your time for what truly matters. Imagine an OS that not only understands your goals but actively helps you achieve them—this isn’t science fiction; it’s here.

In this introduction to warmwind OS, its development team explains how it redefines the relationship between humans and technology. From its new teaching mode that allows the AI to learn directly from your actions to its ability to integrate with even the most outdated legacy software, this operating system is designed to adapt to your unique needs. Whether you’re looking to streamline customer support, enhance HR processes, or simply reclaim hours lost to repetitive tasks, warmwind OS offers a glimpse into a smarter, more intuitive future. As you read on, consider this: what could you achieve if your technology worked as hard as you do? — Read More

#devops

LiteLLM

LiteLLM is an LLM gateway that lets you call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.]. Support for a missing provider or LLM platform can be requested via a feature request. — Read More
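
For a quick sense of what the unified interface looks like, here is a minimal sketch based on LiteLLM’s documented completion call; the model strings and API keys are placeholders, so substitute whichever providers you actually use:

```python
# pip install litellm
import os

from litellm import completion

# Provider keys are read from environment variables (placeholders shown here).
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."

messages = [{"role": "user", "content": "Summarize what an LLM gateway does in one sentence."}]

# The call shape stays the same across providers; only the model string changes.
openai_resp = completion(model="gpt-4o-mini", messages=messages)
claude_resp = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```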

#devops

What can agents actually do?

There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they border on meaningless. Sort of like saying that your company could be much better if you merely adopted more software. That’s certainly true, but it’s not a particularly helpful claim.

This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and generally make the case that agents are a multiplier on the quality of your software and system design. If your software or systems are poorly designed, agents will only cause harm. If there’s any meaningful definition of an AI-first company, it must be a company whose software and systems are designed with immaculate attention to detail.

By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company, and to avoid getting caught up in the needlessly abstract discussions that are often taking place today. — Read More

#devops

Software engineering with LLMs in 2025: reality check

How are devs at AI startups and in Big Tech using AI tools, and what do they think of them? A broad overview of the state of play in tooling, with Anthropic, Google, Amazon, and others.

LLMs are a new tool for building software that we engineers should become hands-on with. There seems to have been a breakthrough with AI agents like Claude Code in the last few months: agents can now “use” the command line to get feedback about suggested changes, and thanks to this addition they have become much more capable than their predecessors.
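
To make that feedback loop concrete, here is a rough sketch of the pattern; this is not how Claude Code is actually implemented, and the ask_llm helper is a hypothetical stand-in for whatever model call you use:

```python
import subprocess

def ask_llm(transcript: str) -> str:
    """Hypothetical helper: send the transcript to an LLM and get back either the next
    shell command to run or the string 'DONE' when it thinks the task is finished."""
    raise NotImplementedError

def agent_loop(task: str, max_steps: int = 10) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        command = ask_llm(transcript)
        if command.strip() == "DONE":
            break
        # Run the proposed command and feed stdout/stderr back as context for the next step.
        result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
        transcript += f"\n$ {command}\n{result.stdout}{result.stderr}"
    return transcript
```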

As Kent Beck put it in our conversation:

“The whole landscape of what’s ‘cheap’ and what’s ‘expensive’ has shifted.”

… It’s time to experiment! — Read More

#devops

Continuous AI in software engineering

When I use AI in my software engineering job, I use it “on tap”: when I have a problem that I’d like to run past the LLM, I go and do that, and then I return to my normal work.

Imagine if we used other software engineering tools like this – for instance, when I have a problem that I’d like to solve with unit tests, I go and run the tests before returning to my normal work. Or suppose that when I want to type-check my codebase, I open a terminal and run npm run tsc. Would that be a sensible way of using tests and types?

Of course not. Tests and types, and many other programming tools, are used continuously: instead of a developer deciding to use them, they’re constantly run and checked via automation. Tests run in CI or in a pre-push Git hook. Types are checked on every compile, or even more often via IDE highlighting. A developer can choose to run these tools manually if they want, but they’ll also get value from them over time even if they never consciously trigger them. Having automatic tests and types raises the level of ambient intelligence in the software development lifecycle. — Read More
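
As a rough illustration of what running an LLM “continuously” could look like, here is a hypothetical script that a CI job or pre-push hook might invoke to have a model review the current diff; the prompt, the LiteLLM call, and the pass/fail convention are assumptions for the sketch, not something the article prescribes:

```python
# Hypothetical CI step: ask a model to review the diff and fail the build if it objects.
# pip install litellm
import subprocess
import sys

from litellm import completion

def current_diff(base: str = "origin/main") -> str:
    # Diff of the working branch against the base branch.
    return subprocess.run(["git", "diff", base], capture_output=True, text=True).stdout

def review(diff: str) -> str:
    prompt = (
        "Review this diff. Reply with 'OK' on the first line if it looks safe to merge, "
        "otherwise list the problems you see:\n\n" + diff
    )
    resp = completion(model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

if __name__ == "__main__":
    verdict = review(current_diff())
    print(verdict)
    sys.exit(0 if verdict.strip().startswith("OK") else 1)
```

Wired into CI or a Git hook, a check like this runs on every push, the same way tests and type checks do, rather than only when a developer thinks to ask.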

#devops

Vibe Coding: The Revolutionary Approach Transforming Software Development

“No vibe coding while I’m on call!” declared Jessie Young, Principal Engineer at GitLab, encapsulating the fierce debate dividing the software development world. On one side stand cautious veterans like Brendan Humphreys, CTO of Canva, who insists, “No, you won’t be vibe coding your way to production.” On the other, technology leaders like Google co-founder Sergey Brin actively encourage engineers to embrace AI-generated code, reporting “10 to 100x speedups” in productivity.

“Vibe coding”—a term coined by AI researcher Dr. Andrej Karpathy, a founding member of OpenAI—has rapidly evolved from casual meme to industry-transforming methodology. In their forthcoming book Vibe Coding: Building Production-Grade Software with GenAI, Chat, Agents, and Beyond, technology veterans Gene Kim and Steve Yegge wade into this contentious territory with a bold claim: this isn’t just another development fad but a fundamental paradigm shift that will render traditional manual coding obsolete. — Read More

#devops

snorting the agi with claude code

I was planning to write a nice overview on using claude code for both myself and my teammates. However, the more I experimented with it, the more intrigued I became. So, this is not an introductory article about claude code – Anthropic already released an excellent version of that. Instead:

We will be doing Serious Science™

What does that mean, exactly? Well, some of this is valuable, but other parts are a bit more…experimental, let’s say.

“Sometimes science is more art than science, Morty. A lot of people don’t get that.” – Rick Sanchez

Additionally, I wouldn’t say this is the most budget-friendly project. I’m using Claude Max, which is $250 a month. I’ll let you decide how much money you feel comfortable lighting on fire.

Nevertheless, let’s not waste any more time… — Read More

#devops

MCP Explained: The New Standard Connecting AI to Everything

AI agents can write code, summarize reports, even chat like humans — but when it’s time to actually do something in the real world, they stall.

Why? Because most tools still need clunky, one-off integrations.

MCP (Model Context Protocol) changes that. It gives AI agents a simple, standardized way to plug into tools, data, and services — no hacks, no hand-coding.
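
For a flavor of what that standardization looks like on the tool side, here is a minimal server sketch in the spirit of the MCP Python SDK’s FastMCP quickstart; treat the exact import path and decorators as a best-effort recollection of the SDK rather than authoritative:

```python
# pip install mcp  (Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

# A tiny MCP server exposing one tool and one resource that any MCP client can discover.
mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """A parameterized resource the client can read."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()  # serves the MCP protocol over stdio by default
```

Any MCP-capable client (an IDE agent, a chat app, a custom orchestrator) can then discover and call these endpoints without a bespoke integration.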

With MCP, AI goes from smart… to actually useful. — Read More

#devops

Attention Wasn’t All We Needed

There are a lot of modern techniques that have been developed since the original Attention Is All You Need paper. Let’s look at some of the most important ones developed over the years and try to implement the basic ideas as succinctly as possible. We’ll use the PyTorch framework for most of the examples. Note that most of these examples are highly simplified sketches of the core ideas; if you want the full implementation, please read the original paper or the production code in frameworks like PyTorch or JAX. As a taste, a minimal sketch of one of them (RMSNorm) follows the list.

  1. Group Query Attention
  2. Multi-head Latent Attention
  3. Flash Attention
  4. Ring Attention
  5. Pre-normalization
  6. RMSNorm
  7. SwiGLU
  8. Rotary Positional Embedding
  9. Mixture of Experts
  10. Learning Rate Warmup
  11. Cosine Schedule
  12. AdamW Optimizer
  13. Multi-token Prediction
  14. Speculative Decoding

— Read More

#devops

Evaluation Driven Development for Agentic Systems

I have been developing agentic systems for around two years now. The same patterns keep emerging again and again, regardless of what kind of system is being built.

I have learned them the hard way, and many others do as well. The first project is not a great success, but you learn from the failures and apply those learnings in the next one. Then you iterate.

Today, I am sharing my system for approaching the development of LLM-based applications from idea to production. Use it if you want to avoid painful lessons in your own projects. — Read More
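
As a hedged sketch of the basic idea behind evaluation-driven development, the loop below scores an agent against a small set of expected behaviors and gates a change on the pass rate; the EvalCase shape, the run_agent parameter, and the threshold are illustrative placeholders, not the author’s framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # returns True if the agent's output is acceptable

def run_evals(run_agent: Callable[[str], str], cases: list[EvalCase], threshold: float = 0.9) -> bool:
    """Run every case, report the pass rate, and gate the change on a minimum score."""
    passed = sum(1 for case in cases if case.check(run_agent(case.prompt)))
    score = passed / len(cases)
    print(f"{passed}/{len(cases)} cases passed ({score:.0%})")
    return score >= threshold

# Illustrative case: a support agent should mention refunds when asked about returns.
cases = [
    EvalCase(
        prompt="A customer asks how to return a damaged item.",
        check=lambda output: "refund" in output.lower(),
    ),
]
```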

#devops