How We Built an AI Second Brain for 60K Knowledge Workers

Knowledge workers at Meta routinely contend with workflow fragmentation, where critical information — including meeting notes, tasks, key decisions, and code context — is siloed across disparate platforms. Each new AI conversation starts cold: the same explanations, the same links, the same ten minutes of context-setting before any real work begins.

So we tested a simple hypothesis: what if an AI agent had persistent, structured access to everything a person is working on, and carried that context across every interaction? Not a chatbot that answers questions, but a working partner that tracks projects, reads meeting notes, surfaces connections, and builds on prior conversations.
That AI second brain experiment, born in the analytics org, has since been adopted by over 60,000 people across Meta: engineers, PMs, designers, legal, finance, communications, and sales. This post covers how it was built, how it grew, and what we learned. – Read More
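The core idea above — persistent, structured context that is carried into every new AI interaction — can be sketched as a small store that accumulates notes and prepends them to each prompt. This is my own illustrative sketch of the pattern, not Meta's actual implementation; all names (`ContextStore`, the sample topics) are hypothetical.

```python
# Illustrative sketch only: a store that keeps structured work context
# (projects, decisions, meeting notes) and prepends it to every new prompt,
# so no conversation starts cold. Not Meta's actual system.
from dataclasses import dataclass, field

@dataclass
class ContextStore:
    """Structured notes keyed by topic, accumulated across interactions."""
    entries: dict = field(default_factory=dict)

    def remember(self, topic: str, note: str) -> None:
        """Record a new piece of context under a topic."""
        self.entries.setdefault(topic, []).append(note)

    def build_prompt(self, question: str) -> str:
        """Prepend all stored context so the model never starts cold."""
        context = "\n".join(
            f"[{topic}] {note}"
            for topic, notes in self.entries.items()
            for note in notes
        )
        return f"Known context:\n{context}\n\nUser question: {question}"

store = ContextStore()
store.remember("project-atlas", "Decision: migrate the pipeline to Spark")
store.remember("meetings", "Weekly sync: launch slipped to Q3")
prompt = store.build_prompt("What's blocking the Atlas launch?")
```

Each call to `build_prompt` folds the whole history in, which is the part that replaces "the same ten minutes of context-setting" the article describes.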

#big7

Google is testing AI chatbot search for YouTube

Google is bringing conversational AI search to YouTube, marking the company’s latest push to infuse its products with AI-powered discovery tools. The feature, dubbed “Ask YouTube,” started rolling out to YouTube Premium subscribers in the US today as an experimental test. It transforms the platform’s search bar into a chatbot-style interface that pulls results from longform videos, Shorts, and text summaries – essentially giving YouTube its own version of Google’s AI Mode for search. — Read More

#big7

Meta inks deal for solar power at night, beamed from space

The race to secure electricity for AI models has reached new heights: Meta has signed an agreement with the startup Overview Energy that could see a thousand satellites beam infrared light to solar farms that power data centers at night.

In 2024, Meta’s data centers used more than 18,000 gigawatt-hours of electricity — roughly enough to power more than 1.7 million American homes for a year — and its need for compute power is only increasing. The company has committed to building 30 gigawatts of renewable power sources, with a focus on industrial-scale solar power plants.
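The homes figure above checks out with a quick back-of-the-envelope calculation, assuming a typical US home uses roughly 10.5 MWh of electricity per year (a commonly cited average; the exact figure is an assumption here):

```python
# Sanity-check the article's claim: 18,000 GWh is roughly 1.7 million
# US homes' worth of annual electricity, assuming ~10.5 MWh per home/year.
data_center_use_gwh = 18_000      # Meta's 2024 data center consumption
per_home_mwh = 10.5               # assumed annual use of one US home

homes_powered = data_center_use_gwh * 1_000 / per_home_mwh
print(f"{homes_powered / 1e6:.2f} million homes")  # → about 1.71 million
```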

Typically, data centers turning to solar power must either invest in battery storage or rely on other generation sources to operate at night.

Overview Energy, a four-year-old, Ashburn, Virginia, outfit that emerged from stealth in December, has a different solution: The company is developing spacecraft that collect plentiful solar power in space. It then plans to convert that energy to near-infrared light and beam it at sufficiently large solar farms — on the order of hundreds of megawatts — which can convert that light to electricity. — Read More

#big7

Meta debuts the Muse Spark model in a ‘ground-up overhaul’ of its AI

Meta released an AI model on Wednesday called Muse Spark, which marks its “first step” toward an “overhaul of [its] AI efforts.”

Muse Spark is the inaugural model to come out of Meta Superintelligence Labs, which was created last year because CEO Mark Zuckerberg was reportedly unhappy with the progress of Meta and its Llama models and how they lagged behind OpenAI’s ChatGPT and Anthropic’s Claude. Meta recruited former Scale AI co-founder and CEO Alexandr Wang to lead Meta Superintelligence Labs and invested $14.3 billion in the data labeling company for a 49% stake.

Now, it’s time for Zuckerberg to see if his reconfigured AI team can woo users. — Read More

#big7

Measuring progress toward AGI: A cognitive framework

[Google is] introducing a framework to measure progress toward AGI, and launching a Kaggle hackathon to build the relevant evaluations.

Artificial General Intelligence (AGI) has the potential to accelerate scientific discovery and help solve some of humanity’s most pressing problems. But it can be difficult to know how close we are to this key milestone, because there’s a lack of empirical tools for evaluating systems’ general intelligence. Tracking progress toward AGI will require a wide range of methods and approaches, and we believe cognitive science provides one important piece of the puzzle.

That’s why today, we’re releasing a new paper, “Measuring Progress Toward AGI: A Cognitive Taxonomy,” that presents a scientific foundation for understanding the cognitive capabilities of AI systems.

Alongside the paper, we are partnering with Kaggle to launch a hackathon, inviting the research community to help build the evaluations needed to put this framework into practice. — Read More

#big7

Project Genie | Experimenting with infinite interactive worlds

Read More

#big7, #videos

Google Revealed “Attention Is All You Need” Part II

For years deep learning has followed one central idea. If we want smarter models, we stack more layers, run larger training, and scale everything upward. This simple formula has given us large language models that reason well and generate high-quality text. Yet they still share one huge weakness. They cannot learn on the fly. They cannot update themselves during use.

Any change needs heavy retraining, and this often destroys old knowledge.

Google Research recently published a paper called Nested Learning. It offers a very different way of thinking about how learning should work inside neural networks. The researchers claim that a model is not just a big stack of layers. It is a hierarchy of learners that operate at different timescales. If this view is correct, it could reshape how we build AI systems in the coming years. — Read More
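The "hierarchy of learners at different timescales" intuition can be illustrated with a toy example: two parameter groups trained on the same objective, where one updates every step and the other only every tenth step. This is my own sketch of the intuition, not Google's Nested Learning algorithm:

```python
# Toy illustration of learners operating at different timescales:
# w_fast updates every step; w_slow updates only every 10th step.
# Both still converge toward the true coefficients of a linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -3.0]) + rng.normal(scale=0.1, size=200)

w_fast = np.zeros(1)   # short-timescale learner
w_slow = np.zeros(1)   # long-timescale learner
lr = 0.1

for step in range(500):
    pred = X[:, 0] * w_fast[0] + X[:, 1] * w_slow[0]
    err = pred - y
    grad_fast = (err * X[:, 0]).mean()
    grad_slow = (err * X[:, 1]).mean()
    w_fast -= lr * grad_fast            # updated every step
    if step % 10 == 0:
        w_slow -= lr * grad_slow        # updated every 10th step

# w_fast approaches 2.0; w_slow approaches -3.0, just more slowly.
```

The point of the toy: nothing forces every parameter to change at the same rate, which is the reframing the paper argues for — a model as nested learners, not a single monolithic stack.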

#big7

Apple’s AI Game is Misunderstood

Apple’s AI strategy has become a Rorschach test for the technology industry. Critics see a company falling dangerously behind. Needham analyst Laura Martin claims it is one to two years behind its competitors. But almost all of this commentary, whether bullish or bearish, focuses on the wrong question.

The standard narrative compares Apple’s AI capex to Microsoft’s, Apple’s Siri to Google’s Gemini, Apple’s foundation models to OpenAI’s GPT-4. By these metrics, Apple looks behind. But these comparisons assume Apple is trying to win the same race. The evidence suggests it isn’t. — Read More

#big7

Meta’s ‘Avocado’ AI Model Delayed as Internal Tensions Rise

Meta is scrambling to deliver its next frontier AI model, codenamed Avocado, as internal friction mounts over the company’s shifting strategy from open-source Llama models to proprietary development. The social media giant’s $14.3 billion bet on new AI leadership is creating cultural clashes while competitors like OpenAI and Google pull ahead in the AI race.

Meta is facing its biggest AI reckoning yet: Avocado won’t arrive until the first quarter of 2026. … The delay represents more than just technical challenges. According to sources familiar with the project, Avocado is wrestling with training-related performance testing as Meta tries to ensure the system will be competitive when it debuts. — Read More

#big7

How Google Pulled Off Its Stunning, Rapid-Fire AI Turnaround

Google came into 2025 with its AI stumbles looming large. The company’s slow start to the generative AI race turned borderline catastrophic in 2024 when its products generated images of diverse Nazis, told users to eat rocks, and couldn’t match OpenAI’s shine. AI chat was seen as a major threat to search, and outsiders didn’t see a coherent strategy. In January, Google stock was on the sale rack and murmurs about CEO Sundar Pichai’s job security floated around the internet.

We’re not quite in December and Google has masterfully reversed course. Its AI models are world class. Its products are buzzy again. Its cloud business is booming. And search is stronger than ever. Its stock is up 56% this year and, at $3.59 trillion, it just surpassed Microsoft’s market cap. Now, no serious person would question Pichai’s job status. – Read More

#big7