Why IP address truncation fails at anonymization

You’ve probably seen it in analytics dashboards, server logs, or privacy documentation: IP addresses with their last octet zeroed out. 192.168.1.42 becomes 192.168.1.0. For IPv6, maybe the last 64 or 80 bits are stripped. This practice is widespread, often promoted as “GDPR-compliant pseudonymization,” and implemented by major analytics platforms, log aggregation services, and web servers worldwide.

There’s just one problem: truncated IP addresses are still personal data under GDPR.

If you’re using IP address truncation thinking it makes data “anonymous” or “non-personal,” you’re creating a false sense of security. European data protection authorities, including the French CNIL, Italian Garante, and Austrian DPA, have repeatedly ruled that truncated IPs remain personal data, especially when combined with other information.

This is a fundamental misunderstanding of what constitutes effective anonymization. — Read More
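For concreteness, here is a minimal sketch of the truncation the article critiques, using Python's standard ipaddress module. Zeroing the last IPv4 octet keeps a /24, and stripping the last 80 bits of an IPv6 address keeps a /48; the function name and prefix choices are illustrative assumptions, not taken from the article, and the article's point is that the output remains personal data.

```python
# A minimal sketch of common IP truncation, assuming /24 for IPv4 and /48 for IPv6.
import ipaddress

def truncate_ip(addr: str) -> str:
    """Zero the host portion of an address, as analytics tools commonly do.

    Note: per the article, the truncated result is still personal data under GDPR.
    """
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    # strict=False lets us build the network from a host address
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network.network_address)

print(truncate_ip("192.168.1.42"))        # -> 192.168.1.0
print(truncate_ip("2001:db8:abcd:12::1")) # -> 2001:db8:abcd::
```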

#cyber

Introducing vibe coding in Google AI Studio

We’ve been building a better foundation for AI Studio, and this week we introduced a totally new AI-powered vibe coding experience in Google AI Studio. This redesigned experience is meant to take you from prompt to working AI app in minutes, without having to juggle API keys or figure out how to tie models together. — Read More

#devops

Stress-testing model specs reveals character differences among language models

We generate over 300,000 user queries that trade off value-based principles in model specifications. Under these scenarios, we observe distinct value prioritization and behavior patterns in frontier models from Anthropic, OpenAI, Google DeepMind, and xAI. Our experiments also uncovered thousands of cases of direct contradictions or interpretive ambiguities within the model spec. — Read More

Paper

#performance

Maximizing the Value of Indicators of Compromise and Reimagining Their Role in Modern Detection

Have we become so focused on TTPs that we’ve dismissed the value at the bottom of the pyramid? This post explores what role, if any, IOCs have in a modern detection program, and what the future may look like for them.

You’d be hard-pressed to find a detection engineer who doesn’t know the Pyramid of Pain[1]. It, along with MITRE ATT&CK[2], really solidified the argument for prioritizing behavioral detections. I know I’ve used it to make that exact point many times.

Lately, though, I’ve wondered if we’ve pushed its lesson too far. Have we become so focused on TTPs that we’ve dismissed the value at the bottom of the pyramid? The firehose of indicators is a daily reality, and it’s time our detection strategies caught up by exploring a more pragmatic approach to their effectiveness, their nuances, and how to get the most value out of the time we are required to spend on them. — Read More

#cyber

Code like a surgeon

A lot of people say AI will make us all “managers” or “editors”…but I think this is a dangerously incomplete view!

Personally, I’m trying to code like a surgeon.

A surgeon isn’t a manager; they do the actual work! But their skills and time are highly leveraged by a support team that handles prep, secondary tasks, and admin. The surgeon focuses on the important stuff they are uniquely good at.

My current goal with AI coding tools is to spend 100% of my time doing stuff that matters. — Read More

#devops

Thinking Like a Data Engineer

I thought becoming a data engineer meant mastering tools. Instead, it meant learning how to see.

I thought the hardest part would be learning the tools — Hadoop, Spark, SQL optimization, and distributed processing. Over time, I realized the real challenge wasn’t technical. It was learning how to think.

Learning to think like a data engineer — to see patterns in chaos, to connect systems to human behavior, to balance simplicity and scale — is a slow process of unlearning, observing, and reimagining. I didn’t get there through courses or certifications. I got there through people.

Four mentors, in four different moments of my life, unknowingly gave me lessons that shaped how I approach engineering, leadership, and even life. Each taught me something not about data, but about thinking systems.

What follows isn’t a tutorial. It’s a map of how four people — and their lessons — rewired how I think. — Read More

#data-science

OpenAI reportedly developing new generative music tool

OpenAI is working on a new tool that would generate music based on text and audio prompts, according to a report in The Information.

… One source told The Information that OpenAI is working with some students from the Juilliard School to annotate scores as a way to provide training data. — Read More

#audio

Through the Looking Glass: Stephen Klein’s Quest to Make AI Think Before It Speaks

“Agentic AI is 100% Non-Sense Designed To Scare You Into Spending Money on Consulting.”

That was the hook of a LinkedIn post designed to ruffle feathers in the AI world. It was bold, direct, and very Stephen Klein.

… While most of Silicon Valley is busy building AI companies designed to automate and replace jobs, all in the pursuit of profit, Stephen is purposely, loudly, going against the grain.

He is the founder of Curiouser.ai, a startup building the world’s first strategic AI coach, Alice, designed not to answer your questions, but to ask them.

And not just any questions: thought-provoking, Socratic, destabilizing questions.

Welcome to Alice in Wonderland. And Stephen, like a modern-day Lewis Carroll, is inviting us to question everything. — Read More

#strategy

Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity

Post-training alignment often reduces LLM diversity, leading to a phenomenon known as mode collapse. Unlike prior work that attributes this effect to algorithmic limitations, we identify a fundamental, pervasive data-level driver: typicality bias in preference data, whereby annotators systematically favor familiar text, consistent with well-established findings in cognitive psychology. We formalize this bias theoretically, verify it on preference datasets empirically, and show that it plays a central role in mode collapse. Motivated by this analysis, we introduce Verbalized Sampling (VS), a simple, training-free prompting strategy to circumvent mode collapse. VS prompts the model to verbalize a probability distribution over a set of responses (e.g., “Generate 5 jokes about coffee and their corresponding probabilities”). Comprehensive experiments show that VS significantly improves performance across creative writing (poems, stories, jokes), dialogue simulation, open-ended QA, and synthetic data generation, without sacrificing factual accuracy or safety. For instance, in creative writing, VS increases diversity by 1.6-2.1x over direct prompting. We further observe an emergent trend that more capable models benefit more from VS. In sum, our work provides a new data-centric perspective on mode collapse and a practical inference-time remedy that helps unlock pre-trained generative diversity. — Read More
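A minimal sketch of the VS prompt pattern described in the abstract: build a prompt that asks for several candidate responses with verbalized probabilities, then sample from that distribution. The JSON output format, the helper names, and the `call_llm` stand-in are our assumptions, not the paper's reference implementation.

```python
# Sketch of Verbalized Sampling, assuming a generic chat-completion backend.
import json
import random

def build_vs_prompt(task: str, k: int = 5) -> str:
    """Ask the model to verbalize k candidate responses with probabilities."""
    return (
        f"Generate {k} responses to the task below, each with the probability "
        f"you would assign to it. Reply as a JSON list of objects with keys "
        f'"response" and "probability" (probabilities should sum to ~1).\n\n'
        f"Task: {task}"
    )

def sample_from_verbalized(raw_json: str) -> str:
    """Parse the verbalized distribution and draw one response from it."""
    items = json.loads(raw_json)
    responses = [item["response"] for item in items]
    weights = [float(item["probability"]) for item in items]
    return random.choices(responses, weights=weights, k=1)[0]

# Usage (pseudocode): raw = call_llm(build_vs_prompt("a joke about coffee"))
#                     print(sample_from_verbalized(raw))
```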

#performance

How I Would Restart My Cybersecurity Career in 2025

Read More

#cyber, #videos