Tilly Norwood | Take The Lead (Official Music Video)

Read More
#videos, #vfx

Open Weights isn’t Open Training

When I was in college, my data structures professor told a story. It went something like this:

“When I was your age, I received an assignment, and encountered an inexplicable bug. I debugged and debugged and found that adding a print statement resolved the bug. I was young like all of you, and I was certain I’d found a bug in the C compiler. Turns out the problem was me.”

The takeaway was clear: if you have a bug, it’s your fault.

This is a good heuristic for most cases, but with open-source ML infrastructure, you need to throw this advice out the window. There might be features that appear to be supported but are not. If you’re suspicious about an operation or stage that’s taking a long time, it may be implemented in a way that’s efficient enough…for an 8B model, not a 1T+ one. Hugging Face is good, but it’s not always correct. Libraries have dependencies, and problems can hide several layers down the stack. Even PyTorch isn’t ground truth.

Over the past couple of months, I worked on developing infrastructure to post-train and serve models cheaply. Ultimately, my team decided to develop a custom training codebase, but only after I spent a few days attempting to use existing open-source options. The following is an account of my successes and failures and what it means for open-weights models. — Read More

#training

How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework

For the last few months, we’ve been using the GitHub Security Lab Taskflow Agent along with a new set of auditing taskflows that specialize in finding web security vulnerabilities. They also turn out to be very successful at finding high-impact vulnerabilities in open source projects.

As security researchers, we’re used to losing time on possible vulnerabilities that turn out to be unexploitable, but with these new taskflows, we can now spend more of our time on manually verifying the results and sending out reports. Furthermore, the severity of the vulnerabilities that we’re reporting is uniformly high. Many of them are authorization bypasses or information disclosure vulnerabilities that allow one user to log in as somebody else or to access the private data of another user.

Using these taskflows, we’ve reported more than 80 vulnerabilities so far. — Read More

#cyber

The “Last Mile” Problem Slowing AI Transformation

Executives are increasingly enamored with the promise of an AI-driven transformation and have invested accordingly. Most large-scale companies have initiated hundreds of pilots and provided widespread access to tools like Copilot and ChatGPT.

But while many of these pilots have succeeded individually—they’ve saved time and money and made processes more efficient—those gains haven’t scaled across the organization. Few companies have been able to fundamentally change their operating and business models around AI. — Read More

#strategy

The Capability Maturity Model for AI in Design

Matt Davey, Chief Experience Officer at 1Password, created a useful capability maturity model for AI in design. His original model has 5 levels (Limited, Reactive, Developing, Embedded, and Leading), each of which differs along 6 characteristics (Leadership on AI, Strategy & Budgeting, AI Culture & Talent, AI Learning & Enablement, AI Agents & Automation, and AI Product Design). Thus, the model covers both the use of AI within the design process and the use of AI in the resulting product. I recommend reading the full thing, but here is a summary of Davey’s 5 capability maturity levels for AI in design.

As discussed below, I added Maturity Level 6, Symbiotic, for a more complete capability maturity ladder.

For a summary of this article, watch my short overview explainer video (YouTube, 6 min.). — Read More

#devops, #vfx