Novel Technique to Detect Cloud Threat Actor Operations

Cloud-based alerting systems often struggle to distinguish between normal cloud activity and targeted malicious operations by known threat actors. The difficulty doesn’t lie in an inability to identify complex alerting operations across thousands of cloud resources, or in a failure to follow identity resources; the problem lies in accurately detecting known persistent threat actor group techniques specifically within cloud environments.

In this research, we hypothesize that a new method of alert analysis could improve detection. Specifically, we look at cloud-based alerting events and their mapping to the MITRE ATT&CK® tactics and techniques they represent. We believe we can show a correlation between threat actors and the types of techniques they use, which trigger specific types of alerting events within victim environments. This distinct, detectable pattern could be used to identify when a known threat actor group compromises an organization. — Read More
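The fingerprinting idea described above can be sketched simply: represent each actor's known technique usage as a frequency vector, map incoming alerts to ATT&CK technique IDs, and score the observed mix against each actor profile. The actor names, technique weights, and alert stream below are illustrative assumptions, not real threat intelligence.

```python
from collections import Counter
from math import sqrt

# Hypothetical actor fingerprints: ATT&CK technique ID -> relative frequency.
# Weights are made up for illustration.
ACTOR_PROFILES = {
    "ActorA": Counter({"T1078": 5, "T1530": 3, "T1098": 2}),  # cloud-account-heavy mix
    "ActorB": Counter({"T1566": 4, "T1059": 4, "T1105": 2}),  # phishing/execution mix
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse technique-frequency vectors."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_actor(alert_techniques: list[str]) -> tuple[str, float]:
    """Return the best-matching actor profile for a batch of mapped alerts."""
    observed = Counter(alert_techniques)
    return max(
        ((name, cosine(observed, prof)) for name, prof in ACTOR_PROFILES.items()),
        key=lambda pair: pair[1],
    )

# Technique IDs already mapped from raw alerts (hypothetical input).
alerts = ["T1078", "T1078", "T1530", "T1098"]
name, score = match_actor(alerts)  # -> ("ActorA", ~0.99)
```

A real pipeline would learn the profiles from incident data and threshold the score before attributing activity; this only shows the shape of the correlation the researchers describe.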

#cyber

How I Use Claude Code

I’ve been using Claude Code as my primary development tool for approximately nine months, and the workflow I’ve settled into is radically different from what most people do with AI coding tools. Most developers type a prompt, sometimes use plan mode, fix the errors, and repeat. The more terminally online are stitching together Ralph loops, MCPs, gas towns (remember those?), etc. The results in both cases are a mess that completely falls apart for anything non-trivial.

The workflow I’m going to describe has one core principle: never let Claude write code until you’ve reviewed and approved a written plan. This separation of planning and execution is the single most important thing I do. It prevents wasted effort, keeps me in control of architecture decisions, and produces significantly better results with far less token usage than jumping straight to code. — Read More

#devops

Software stocks crater as independent research piece details potential AI dystopian scenario

Software stocks are getting shellacked as a post published by Citrini Research and Lotus Technology Management managing partner Alap Shah has sharpened attention on the magnitude and breadth of losers from the AI boom.

The piece, titled “The 2028 Global Intelligence Crisis,” is a hypothetical scenario analysis exploring the left-tail risks in two years’ time in a world where there’s an aggressive AI build-out and adoption of AI agents. — Read More

#singularity

Detecting and preventing distillation attacks

We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models. These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.

These labs used a technique called “distillation,” which involves training a less capable model on the outputs of a stronger one. Distillation is a widely used and legitimate training method. — Read More
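The core of distillation as described above can be illustrated in a few lines: the student model is trained to match the teacher's output distribution, commonly by minimizing KL divergence between temperature-softened softmax outputs. The logits and temperature below are made-up example values, not details of any lab's actual setup.

```python
from math import exp, log

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with temperature scaling."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions for one token.

    Higher temperature exposes more of the teacher's 'dark knowledge'
    (relative probabilities of wrong answers), which is what makes
    training on a stronger model's outputs effective.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.0, 1.0, 0.2]  # hypothetical stronger-model logits
student = [2.5, 1.2, 0.4]  # hypothetical student logits, same input
loss = distillation_loss(teacher, student)  # small positive value
```

In practice the loss is averaged over many prompts and combined with a standard cross-entropy term; what makes the campaigns described here illicit is not the technique itself but harvesting the teacher's outputs through fraudulent accounts.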

Andrej Karpathy Just Built an Entire GPT in 243 Lines of Python

I’ve read many transformer implementations during my PhD. Dense codebases. Thousands of files. Dependencies stacked on top of dependencies. You open a repo, run pip install -r requirements.txt, and watch 400 packages download before you can even see your model train (then errors, dependency issues, etc.).

Then on February 11, 2026, Andrej Karpathy dropped a single Python file that trains and runs a GPT from scratch. 243 lines. Zero dependencies. — Read More

#training