Anthropic’s AI can now run and write code

Anthropic’s Claude chatbot can now write and run JavaScript code.

Today, Anthropic launched a new analysis tool that helps Claude respond with what the company describes as “mathematically precise and reproducible answers.” With the tool enabled — it’s currently in preview — Claude can perform calculations and analyze data from files like spreadsheets and PDFs, rendering the results as interactive visualizations. — Read More

#devops

Meta’s AI Abundance

Stratechery has benefited from a Meta cheat code since its inception: wait for investors to panic and the stock to drop, write an Article that says Meta is fine — better than fine, even — and sit back and watch the take be proven correct. Notable examples include 2013’s post-IPO swoon, the 2018 Stories swoon, and most recently, the 2022 TikTok/Reels swoon (if you want a bonus, I was optimistic during the 2020 COVID swoon too).

Perhaps with that in mind I wrote a cautionary note earlier this year about Meta and Reasonable Doubt: while investors were concerned about the sustainability of Meta’s spending on AI, I was worried about increasing ad prices and the lack of new formats after Stories and then Reels; the long-term future, particularly in terms of the metaverse, was just as much of a mystery as always.

Six months on, I feel the exact opposite: it seems increasingly clear to me that Meta is in fact the best-placed company to take advantage of generative AI. — Read More

#big7

Rage against the machine

For all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does

Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.

What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.

But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. — Read More

#human

Moving Data, Moving Target

Uncertainties remain in China’s overhauled cross-border data transfer regime

On March 22, 2024, the Cyberspace Administration of China (CAC) unveiled the current version of China’s rules governing outbound data transfers. The new “Provisions on Promoting and Regulating Cross-Border Data Flows” (or “2024 Provisions”) took effect immediately and eased restrictions affecting many businesses, while still underscoring the strength of the CAC’s authority over high-risk areas. For companies conducting data transfers that fall within the newly exempted categories, the regulations brought relief after years of daunting uncertainty. Long reporting cycles, extensive preparation of materials, and lengthy waits for audit results had created seemingly insurmountable obstacles for businesses relying on data flows, leading to deep pessimism about China’s business environment.

The new rules, which eased burdens for some and pointed to possible solutions for others, were the latest chapter in a long story of regulatory uncertainty, and they won’t be the last. — Read More

#china

Scalable watermarking for identifying large language model outputs

Large language models (LLMs) have enabled the generation of high-quality synthetic text, often indistinguishable from human-written content, at a scale that can markedly affect the nature of the information ecosystem¹⁻³. Watermarking can help identify synthetic text and limit accidental or deliberate misuse⁴, but has not been adopted in production systems owing to stringent quality, detectability and computational efficiency requirements. Here we describe SynthID-Text, a production-ready text watermarking scheme that preserves text quality and enables high detection accuracy, with minimal latency overhead. SynthID-Text does not affect LLM training and modifies only the sampling procedure; watermark detection is computationally efficient, without using the underlying LLM. To enable watermarking at scale, we develop an algorithm integrating watermarking with speculative sampling, an efficiency technique frequently used in production systems⁵. Evaluations across multiple LLMs empirically show that SynthID-Text provides improved detectability over comparable methods, and standard benchmarks and human side-by-side ratings indicate no change in LLM capabilities. To demonstrate the feasibility of watermarking in large-scale production systems, we conducted a live experiment that assessed feedback from nearly 20 million Gemini⁶ responses, again confirming the preservation of text quality. We hope that the availability of SynthID-Text⁷ will facilitate further development of watermarking and responsible use of LLM systems. — Read More
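
The paper’s own mechanism is Tournament sampling, which it describes in detail. As a rough illustration of how a sampling-level watermark can work in general, here is a minimal green-list-style sketch (not SynthID-Text’s actual algorithm; the key, constants, and function names are assumptions for the example):

```python
import hashlib

import numpy as np

VOCAB_SIZE = 50_000
GREEN_FRACTION = 0.5   # fraction of the vocabulary favored at each step
BIAS = 2.0             # logit bonus added to "green" tokens

def green_ids(prev_token: int, key: str = "demo-key") -> np.ndarray:
    """Pseudorandomly partition the vocabulary, seeded by the previous token.

    The watermark key and context seed a hash, so a detector can recompute
    the same partition later without access to the model."""
    seed = int.from_bytes(
        hashlib.sha256(f"{key}:{prev_token}".encode()).digest()[:8], "big"
    )
    rng = np.random.default_rng(seed)
    k = int(VOCAB_SIZE * GREEN_FRACTION)
    return rng.choice(VOCAB_SIZE, size=k, replace=False)

def watermark_logits(logits: np.ndarray, prev_token: int) -> np.ndarray:
    """Bias sampling toward the green list; model training is untouched."""
    out = logits.copy()
    out[green_ids(prev_token)] += BIAS
    return out

def detection_score(tokens: list[int]) -> float:
    """Fraction of tokens that land in their green list.

    Unwatermarked text should score near GREEN_FRACTION; watermarked text
    scores higher. No LLM forward pass is needed, which mirrors the paper's
    point that detection is computationally cheap."""
    hits = sum(t in set(green_ids(p)) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

The sketch makes the central trade-off concrete: the watermark lives entirely in the sampling step, and detectability is bought with a small, tunable shift in the output distribution, the same quality-versus-detectability tension the paper’s evaluations measure.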

#fake

Polish Radio Station Stirs Controversy by Replacing Hosts With AI

A Polish radio station is at the center of controversy after it replaced its hosts with artificial intelligence (AI) presenters.

OFF Radio Krakow, based in southern Poland, introduced three avatars in what it said was “the first experiment in Poland in which journalists…are virtual characters created by AI.”

The station created the avatars in hopes of reaching a younger audience, using them to discuss cultural, artistic, and social topics such as LGBTQ+ issues. — Read More

#news-summarization

HeyGen enables your digital twin to do Zoom calls for you

Video platform HeyGen has added a feature that it claims allows users to send AI-powered digital versions of themselves to Zoom meetings and other live interactions.

The avatars can join one or more meetings simultaneously, 24/7. According to HeyGen, they are designed not only to look and sound like the people they represent, but also to think, talk, and make decisions like them.

The HeyGen Interactive Avatar is equipped with OpenAI real-time voice integration, which allows it to hold an intelligent, efficient and timely conversation with any audience. — Read More

#vfx

Simplifying, Stabilizing and Scaling Continuous-Time Consistency Models

Consistency models (CMs) are a powerful class of diffusion-based generative models optimized for fast sampling. Most existing CMs are trained using discretized timesteps, which introduce additional hyperparameters and are prone to discretization errors. While continuous-time formulations can mitigate these issues, their success has been limited by training instability. To address this, we propose a simplified theoretical framework that unifies previous parameterizations of diffusion models and CMs, identifying the root causes of instability. Based on this analysis, we introduce key improvements in diffusion process parameterization, network architecture, and training objectives. These changes enable us to train continuous-time CMs at an unprecedented scale, reaching 1.5B parameters on ImageNet 512×512. Our proposed training algorithm, using only two sampling steps, achieves FID scores of 2.06 on CIFAR-10, 1.48 on ImageNet 64×64, and 1.88 on ImageNet 512×512, narrowing the gap in FID scores with the best existing diffusion models to within 10%. — Read More
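
As a rough sketch of what two-step sampling with a consistency model looks like (the consistency function `f`, the EDM-style noise levels, and the intermediate timestep below are illustrative assumptions, not the paper’s exact parameterization):

```python
import torch

SIGMA_MAX = 80.0   # assumed highest noise level (EDM-style schedule)
SIGMA_MID = 0.8    # illustrative intermediate noise level for step two

@torch.no_grad()
def sample_two_step(f, shape):
    """Draw a sample in two model evaluations.

    `f(x, sigma)` is a trained consistency function: it maps a noisy input
    at noise level sigma directly to an estimate of the clean data.
    Step 1 jumps from pure noise to a clean estimate; step 2 partially
    re-noises that estimate and jumps again, trading one extra forward
    pass for higher sample quality."""
    x = torch.randn(shape) * SIGMA_MAX           # start from pure noise
    x0 = f(x, SIGMA_MAX)                         # step 1: noise -> data
    x_mid = x0 + torch.randn(shape) * SIGMA_MID  # partially re-noise
    return f(x_mid, SIGMA_MID)                   # step 2: refine
```

A standard diffusion sampler of comparable quality typically needs tens of network evaluations; the two-call regime above is the one in which the abstract’s FID numbers are reported.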

#image-recognition

Bots, agents, and digital workers: AI is changing the very definition of work

Imagine a world where your digital colleague handles entire workflows, adapts to real-time challenges, and collaborates seamlessly with your human team. This isn’t science fiction—it’s the imminent reality of AI agents in the workplace. 

As Sam Altman, CEO of OpenAI, boldly predicted at the company’s annual DevDay event, “2025 is when AI agents will work.” But what does this mean for the future of human labor, organizational structures, and the very definition of work itself?

According to research by The Conference Board, 56% of workers use generative AI on the job, and nearly 1 in 10 use generative AI tools daily.  — Read More

#strategy

Lawsuit Argues Warrantless Use of Flock Surveillance Cameras Is Unconstitutional

“It is functionally impossible for people to drive anywhere without having their movements tracked, photographed, and stored in an AI-assisted database that enables the warrantless surveillance of their every move. This civil rights lawsuit seeks to end this dragnet surveillance program.”

A civil liberties organization has filed a federal lawsuit in Virginia arguing that widespread surveillance enabled by Flock, a company that sells networks of automated license plate readers, is unconstitutional under the Fourth Amendment. … “In Norfolk, no one can escape the government’s 172 unblinking eyes.”  — Read More

#surveillance