YouTube Thinks AI Is Its Next Big Bang

Google figured out early on that video would be a great addition to its search business, so in 2005 it launched Google Video. Focused on making deals with the entertainment industry for second-rate content, and overly cautious about what users could upload, it flopped. Meanwhile, a tiny startup run by a handful of employees working above a San Mateo, California, pizzeria was exploding, simply by letting anyone upload their goofy videos and not worrying too much about who held the copyrights to the clips. In 2006, Google snapped up that year-old company, figuring it would sort out the IP stuff later. (It did.) Though the $1.65 billion purchase price for YouTube was about a billion dollars more than its valuation, it was one of the greatest bargains ever. YouTube is now arguably the most successful video property in the world. It is an industry leader in music and podcasting, and more than half of its viewing time now happens on living room screens. It has paid out over $100 billion to creators since 2021. One estimate from MoffettNathanson analysts, cited by Variety, is that if it were a separate company, it might be worth $550 billion.

Now the service is taking what might be its biggest leap yet, embracing a new paradigm that could change its essence. I’m talking, of course, about AI. Since YouTube is still a wholly owned subsidiary of AI-obsessed Google, it’s not surprising that its anniversary product announcements this week touted features that let creators use AI to enhance or produce videos. After all, Google DeepMind’s Veo 3 technology was YouTube’s for the taking. Ready or not, the video camera will ultimately be replaced by the prompt. This means rethinking YouTube’s superpower: authenticity. — Read More

#strategy

Effective context engineering for AI agents

After a few years of prompt engineering being the focus of attention in applied AI, a new term has come to prominence: context engineering. Building with language models is becoming less about finding the right words and phrases for your prompts, and more about answering the broader question of “what configuration of context is most likely to generate our model’s desired behavior?”

Context refers to the set of tokens included when sampling from a large language model (LLM). The engineering problem at hand is optimizing the utility of those tokens against the inherent constraints of LLMs in order to consistently achieve a desired outcome. Effectively wrangling LLMs often requires thinking in context: considering the holistic state available to the LLM at any given time and the potential behaviors that state might yield.
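
To make the idea concrete, here is a minimal sketch of context assembly under a fixed token budget. Everything in it is illustrative rather than taken from the post: `estimate_tokens` is a crude stand-in for a real tokenizer, and keeping the most recent turns is just one possible heuristic for token utility.

```python
# Hypothetical sketch: treat the context window as a scarce resource and
# keep only the highest-utility tokens. Names and heuristics are assumptions
# for illustration, not an API from the post.

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def assemble_context(system_prompt: str, history: list[str],
                     budget: int = 8000) -> list[str]:
    """Always keep the system prompt, then fill the remaining budget
    with the most recent conversation turns, dropping the oldest first."""
    remaining = budget - estimate_tokens(system_prompt)
    kept: list[str] = []
    for turn in reversed(history):  # walk from newest to oldest
        cost = estimate_tokens(turn)
        if cost > remaining:
            break  # budget exhausted; older turns are dropped
        kept.append(turn)
        remaining -= cost
    return [system_prompt] + list(reversed(kept))  # restore chronological order
```

A recency cutoff is only the simplest policy; the same budgeting skeleton could rank turns by relevance, summarize dropped history, or reserve space for tool definitions and retrieved documents.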

In this post, we’ll explore the emerging art of context engineering and offer a refined mental model for building steerable, effective agents. — Read More

#nlp