No Code Is Dead

Once again, the software development landscape is experiencing another big shift. After years of drag-and-drop, no-code platforms democratizing app creation, generative AI (GenAI) is eliminating the need for no-code platforms in many cases.

Mind you, I said “no code” not “low code” — there are key differences. (More on this later.)

GenAI has introduced the ability for nontechnical users to use natural language to build apps just by telling the system what they want done. Call it “vibe coding” — the ability to describe what you want and watch AI generate working applications, or whatever. But will this new paradigm enhance existing no-code tools or render them obsolete?

I sought out insights from industry veterans to explore this pivotal question, revealing a broad spectrum of perspectives on where the intersection of AI and visual development is heading. — Read More

#devops

The hidden cost of AI reliance

I want to be clear: I’m a software engineer who uses LLMs ‘heavily’ in my daily work. They have undeniably been a good productivity tool, helping me solve problems and tackle projects faster. This post isn’t about how we should reject LLMs and progress but rather my reflection on what we might be losing in our haste to embrace them.

The rise of AI coding assistants has brought in what many call a new age of productivity. LLMs excel at several key areas that genuinely improve developer workflows: writing isolated functions; scaffolding boilerplate code like test cases, configuration files, explaining unfamiliar code or complex algorithms, generating documentation and comments, and helping with syntax in unfamiliar languages or frameworks. These capabilities allow us to work ‘faster’.

But beneath this image of enhanced efficiency, I find myself wondering if there’s a more troubling affect: Are we trading our hard-earned intelligence for short-term convenience?Read More

#devops

Andrew Ng: Building Faster with AI

Read More

#videos

Inside a neuroscientist’s quest to cure coma

Locked inside their minds, thousands await a cure. Neuroscientist Daniel Toker is racing to find it.

The study of consciousness is a field crowded with scientists, philosophers, and gurus. But neuroscientist Daniel Toker is focused on its shadow twin: unconsciousness.

His path to this research began with a tragedy — one he witnessed firsthand. While at a music festival, a young concertgoer near Toker dove headfirst into a shallow lake. He quickly surfaced, his body limp and still. Toker, along with others, rushed to help. He performed CPR, but it soon became apparent that the young person’s neck had snapped. There was nothing to be done. — Read More

#human

Introducing warmwind OS : The AI Operating System That Works Smarter

What if your operating system didn’t just run your computer but actively worked alongside you, anticipating your needs, learning your habits, and automating your most tedious tasks? Enter warmwind OS, the world’s first AI-driven operating system, a bold leap into the future of human-computer interaction. Unlike traditional systems that passively wait for your commands, warmwind OS operates as a proactive partner, seamlessly blending into your workflows to eliminate inefficiencies and free up your time for what truly matters. Imagine an OS that not only understands your goals but actively helps you achieve them—this isn’t science fiction; it’s here.

In this introduction of warmwind OS, its development team explain how it redefines the relationship between humans and technology. From its new teaching mode that allows the AI to learn directly from your actions to its ability to integrate with even the most outdated legacy software, this operating system is designed to adapt to your unique needs. Whether you’re looking to streamline customer support, enhance HR processes, or simply reclaim hours lost to repetitive tasks, warmwind OS offers a glimpse into a smarter, more intuitive future. As you read on, consider this: what could you achieve if your technology worked as hard as you do? — Read More

#devops

Lean and Mean: How We Fine-Tuned a Small Language Model for Secret Detection in Code

We fine-tuned a small language model (Llama 3.2 1B) for detecting secrets in code, achieving 86% precision and 82% recall—significantly outperforming traditional regex-based methods. Our approach addresses the limitations of both regex patterns (limited context understanding) and large language models (high computational costs and privacy concerns) by creating a lean, efficient model that can run on standard CPU hardware. This blog post details our journey from data preparation to model training and deployment, demonstrating how Small Language Models can solve specific cybersecurity challenges without the overhead of massive LLMs. 

This research is now one of Wiz’s core Secret Security efforts, adding fast, accurate secret detection as part of our solution. — Read More

#cyber

‘Positive review only’: Researchers hide AI prompts in papers

Research papers from 14 academic institutions in eight countries — including Japan, South Korea and China — contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found.

Nikkei looked at English-language preprints — manuscripts that have yet to undergo formal peer review — on the academic research platform arXiv.

It discovered such prompts in 17 articles, whose lead authors are affiliated with 14 institutions including Japan’s Waseda University, South Korea’s KAIST, China’s Peking University and the National University of Singapore, as well as the University of Washington and Columbia University in the U.S. Most of the papers involve the field of computer science. — Read More

#fake

LiteLLM

LiteLLM is a LLM gateway that lets you Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq etc.]. Support for missing a providers or LLM Platform can be requested via a feature request. — Read More

#devops

Why I don’t think AGI is right around the corner

Sometimes people say that even if all AI progress totally stopped, the systems of today would still be far more economically transformative than the internet. I disagree. I think the LLMs of today are magical. But the reason that the Fortune 500 aren’t using them to transform their workflows isn’t because the management is too stodgy. Rather, I think it’s genuinely hard to get normal humanlike labor out of LLMs. And this has to do with some fundamental capabilities these models lack.

I like to think I’m “AI forward” here at the Dwarkesh Podcast. I’ve probably spent over a hundred hours trying to build little LLM tools for my post production setup. And the experience of trying to get them to be useful has extended my timelines. I’ll try to get the LLMs to rewrite autogenerated transcripts for readability the way a human would. Or I’ll try to get them to identify clips from the transcript to tweet out. Sometimes I’ll try to get them to co-write an essay with me, passage by passage. These are simple, self contained, short horizon, language in-language out tasks – the kinds of assignments that should be dead center in the LLMs’ repertoire. And they’re 5/10 at them. Don’t get me wrong, that’s impressive.

But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human’s. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt. In practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience. — Read More

#strategy

What can agents actually do?

There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they border on meaningless. Sort of like saying that your company could be much better if you merely adopted more software. That’s certainly true, but it’s not a particularly helpful claim.

This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and to generally make the case that agents are a multiplier on the quality of your software and system design. If your software or systems are poorly designed, agents will only cause harm. If there’s any meaningful definition of an AI-first company, it must be companies where their software and systems are designed with an immaculate attention to detail.

By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company, and to avoid getting caught up in the needlessly abstract discussions that are often taking place today. — Read More

#devops