AI virtual personality YouTubers, or ‘VTubers,’ are earning millions

One of the most popular gaming YouTubers is named Bloo, but he isn’t a human — he’s a VTuber, a fully virtual personality powered by artificial intelligence.

VTubers first gained traction in Japan in the 2010s. Now, advances in AI are making it easier than ever to create VTubers, fueling a new wave of virtual creators on YouTube.

As AI-generated content becomes more common online, concerns about its impact are growing, especially as it becomes easier to generate convincing but entirely AI-fabricated videos. — Read More

How to Become a VTuber

#strategy

#vfx

Continuous AI in software engineering

When I use AI in my software engineering job, I use it “on tap”: when I have a problem that I’d like to run past the LLM, I go and do that, and then I return to my normal work.

Imagine if we used other software engineering tools like this: for instance, when I have a problem that I'd like to solve with unit tests, I go and run the tests before returning to my normal work. Or suppose that when I want to type-check my codebase, I open a terminal and run npm run tsc. Would that be a sensible way of using tests and types?

Of course not. Tests and types, and many other programming tools, are used continuously: instead of a developer deciding to use them, they're constantly run and checked via automation. Tests run in CI or as a pre-push hook. Types are checked on every compile, or even more often via IDE highlighting. A developer can choose to run these tools manually if they want, but they'll also get value from them over time even if they never consciously trigger them. Having automatic tests and types raises the level of ambient intelligence in the software development lifecycle. — Read More
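The "continuous" idea above can be pictured as a gate that runs every check automatically rather than waiting for a developer to invoke one. This is a minimal, hypothetical sketch (the check names and the `Check`/`run_gate` helpers are invented for illustration); an LLM-powered review would simply be one more check slotted into the same pipeline:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    """One automated gate: unit tests, a type checker, a linter, or an AI review."""
    name: str
    run: Callable[[], bool]  # returns True when the check passes

def run_gate(checks: list[Check]) -> list[str]:
    """Run every check, as CI or a pre-push hook would; return the names that failed."""
    return [c.name for c in checks if not c.run()]

# Hypothetical checks; real ones would shell out to a test runner,
# a type checker, or an LLM reviewer instead of returning a constant.
checks = [
    Check("unit-tests", lambda: True),
    Check("type-check", lambda: True),
    Check("ai-review", lambda: True),  # the "continuous AI" slot
]
```

The point of the sketch is structural: once AI review is just another `Check`, it runs on every push like tests and types do, instead of "on tap".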

#devops

Google’s New AI App Doppl: Shopping Will NEVER Be The Same

Read More

#strategy-videos

Unpacking the bias of large language models

Research has shown that large language models (LLMs) tend to overemphasize information at the beginning and end of a document or conversation, while neglecting the middle.

This “position bias” means that, if a lawyer is using an LLM-powered virtual assistant to retrieve a certain phrase in a 30-page affidavit, the LLM is more likely to find the right text if it is on the initial or final pages.

MIT researchers have discovered the mechanism behind this phenomenon. … They found that certain design choices which control how the model processes input data can cause position bias. — Read More
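The affidavit example above is essentially a "needle in a haystack" test. Below is a minimal, hypothetical harness for setting one up (the filler text, the target phrase, and both function names are invented for illustration); the model call itself is omitted:

```python
FILLER = "This clause restates boilerplate terms."
NEEDLE = "The deposit shall be returned within 14 days."

def build_document(n_sentences: int, needle_pos: int) -> str:
    """Surround the target phrase with filler sentences at a chosen depth."""
    sents = [FILLER] * n_sentences
    sents.insert(needle_pos, NEEDLE)
    return " ".join(sents)

def probe_positions(n_sentences: int, buckets: int = 5) -> list[int]:
    """Evenly spaced insertion depths, from the first sentence to the last."""
    return [round(i * n_sentences / (buckets - 1)) for i in range(buckets)]
```

For each depth, one would ask the model to retrieve the phrase and score accuracy per position; a U-shaped curve, high at the start and end but low in the middle, is the signature of position bias.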

#bias

Project Vend: Can Claude run a small shop? (And why does that matter?)

We let Claude manage an automated store in our office as a small business for about a month. We learned a lot from how close it was to success—and the curious ways that it failed—about the plausible, strange, not-too-distant future in which AI models are autonomously running things in the real economy. — Read More

#strategy

Machines of Faithful Obedience

Throughout history, technological and scientific advances have had both good and ill effects, but their overall impact has been overwhelmingly positive. Thanks to scientific progress, most people on earth live longer, healthier, and better than they did centuries or even decades ago.

I believe that AI (including AGI and ASI) can do the same and be a positive force for humanity. I also believe that it is possible to solve the “technical alignment” problem and build AIs that follow the words and intent of our instructions and report faithfully on their actions and observations.

… In the next decade, AI progress will be extremely rapid, and such periods of sharp transition can be risky. What we — in industry, academia, and government — do in the coming years will matter a lot to ensure that AI's benefits far outweigh its costs. — Read More

#strategy

Using AI to identify cybercrime masterminds

Online criminal forums, both on the public internet and on the “dark web” of Tor .onion sites, are a rich resource for threat intelligence researchers. The Sophos Counter Threat Unit (CTU) has a team of dark-web researchers collecting intelligence and interacting with dark-web forums, but combing through these posts is a time-consuming and resource-intensive task, and it’s always possible that things are missed.

As we strive to make better use of AI and data analysis, Sophos AI researcher Francois Labreche, working with Estelle Ruellan of Flare and the Université de Montréal and Masarah Paquet-Clouston of the Université de Montréal, set out to see if they could approach the problem of identifying key actors on the dark web in a more automated way. Their work, originally presented at the 2024 APWG Symposium on Electronic Crime Research, has recently been published as a paper. — Read More

#cyber

The New Skill in AI is Not Prompting, It’s Context Engineering

Context Engineering is a new term gaining traction in the AI world. The conversation is shifting from “prompt engineering” to a broader, more powerful concept: Context Engineering. Tobi Lutke describes it as “the art of providing all the context for the task to be plausibly solvable by the LLM,” and he is right.

With the rise of Agents, what information we load into the “limited working memory” matters more than ever. We are seeing that the main thing determining whether an Agent succeeds or fails is the quality of the context you give it. Most agent failures are not model failures anymore; they are context failures. — Read More
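One way to make the "limited working memory" idea concrete is a packing step that fits instructions, retrieved documents, and conversation history into a fixed budget. This is a hypothetical sketch only (the function name is invented, and a character budget stands in for a real token budget, which a tokenizer would measure):

```python
def build_context(system: str, docs: list[str], history: list[str],
                  budget_chars: int = 4000) -> str:
    """Greedily pack context pieces into a limited 'working memory'.

    System instructions go first; then retrieved documents (assumed already
    ranked by relevance), then conversation turns, newest first. Separator
    overhead is ignored for simplicity.
    """
    parts = [system]
    used = len(system)
    for piece in docs + list(reversed(history)):
        if used + len(piece) > budget_chars:
            break  # budget exhausted; lower-priority pieces are dropped
        parts.append(piece)
        used += len(piece)
    return "\n\n".join(parts)
```

The design choice the sketch highlights is prioritization: when the budget is tight, it is the least relevant documents and the oldest turns that get dropped, which is exactly the kind of decision "context engineering" is about.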

#nlp

Mark Zuckerberg announces creation of Meta Superintelligence Labs. Read the memo

Mark Zuckerberg said Monday that he’s creating Meta Superintelligence Labs, which will be led by some of his company’s most recent hires, including Scale AI ex-CEO Alexandr Wang and former GitHub CEO Nat Friedman.

Zuckerberg said the new AI superintelligence unit, MSL, will house the company’s various teams working on foundation models such as the open-source Llama software, products, and Fundamental Artificial Intelligence Research projects, according to an internal memo obtained by CNBC. — Read More

#big7

China’s biggest public AI drop since DeepSeek, Baidu’s open source Ernie, is about to hit the market

On Monday, Chinese technology giant Baidu is making its Ernie generative AI large language model open source, a move by China’s tech sector that could be its biggest in the AI race since the emergence of DeepSeek. The open sourcing of Ernie will be a gradual roll-out, according to the company. 

Will it be a shock to the market on the order of DeepSeek? That’s a question which divides AI experts. [Some] say Ernie’s release could cement China’s position as the undisputed AI leader. — Read More

#china-ai