Why the AI world is suddenly obsessed with a 160-year-old economics paradox

Last week, news spread that a Chinese AI company, DeepSeek, had built a cutting-edge chatbot at a fraction of the cost of its American competitors. It sent the stock prices of American tech companies plummeting.

But Microsoft CEO Satya Nadella put a happy spin on the whole episode, citing a 160-year-old economics concept to suggest that this was good news.

“Jevons paradox strikes again!” Nadella wrote on social media, sharing the concept’s Wikipedia page. “As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of.” — Read More

#strategy

The Rise of DeepSeek: What the Headlines Miss

on their impressive benchmark performance and efficiency gains. While these achievements deserve recognition and carry policy implications (more below), the story of compute access, export controls, and AI development is more complex than many reports suggest. [This article covers additional] key points that deserve more attention.

… Export controls will affect China’s AI ecosystem through reduced deployment capabilities, limited company growth, and constraints on synthetic training and self-play capabilities.

… DeepSeek’s achievements are genuine and significant. Claims dismissing their progress as mere propaganda miss the mark. — Read More

#china-ai

Lessons from red teaming 100 generative AI products

In recent years, AI red teaming has emerged as a practice for probing the safety and security of generative AI systems. Due to the nascency of the field, there are many open questions about how red teaming operations should be conducted. Based on our experience red teaming over 100 generative AI products at Microsoft, we present our internal threat model ontology and eight main lessons we have learned:

  1. Understand what the system can do and where it is applied
  2. You don’t have to compute gradients to break an AI system
  3. AI red teaming is not safety benchmarking
  4. Automation can help cover more of the risk landscape
  5. The human element of AI red teaming is crucial
  6. Responsible AI harms are pervasive but difficult to measure
  7. Large language models (LLMs) amplify existing security risks and introduce new ones
  8. The work of securing AI systems will never be completed

By sharing these insights alongside case studies from our operations, we offer practical recommendations aimed at aligning red teaming efforts with real world risks. We also highlight aspects of AI red teaming that we believe are often misunderstood and discuss open questions for the field to consider. Read More

#cyber

Alibaba announces Qwen 2.5-Max to fight DeepSeek — what to know

Days after DeepSeek took the internet by storm, Chinese tech company Alibaba announced Qwen 2.5-Max, the latest of its LLM series. The unveiling of this open-source agent can easily be perceived as a direct challenge to DeepSeek and domestic rivals. The release is on the first day of the Lunar New Year when most Chinese people have taken time off work to celebrate and spend time with their families. Alibaba seems to be sending the message that they are hard at work while their competition takes the day off. — Read More

#china-ai