For decades, understanding the clicks, whistles and burst pulses of dolphins has been a scientific frontier. What if we could not only listen to dolphins, but also understand the patterns of their complex communication well enough to generate realistic responses?
Today, on National Dolphin Day, Google, in collaboration with researchers at Georgia Tech and field researchers at the Wild Dolphin Project (WDP), is announcing progress on DolphinGemma: a foundational AI model trained to learn the structure of dolphin vocalizations and generate novel dolphin-like sound sequences. This approach in the quest for interspecies communication pushes the boundaries of AI and our potential connection with the marine world. — Read More
Meta defends Llama 4 release against ‘reports of mixed quality,’ blames bugs
Meta’s new flagship AI language model, Llama 4, arrived suddenly over the weekend, with the parent company of Facebook, Instagram, WhatsApp and Quest VR (among other services and products) revealing not one, not two, but three versions — all upgraded to be more powerful and performant using the popular “Mixture-of-Experts” architecture and a new training method for setting hyperparameters, known as MetaP.
But following the surprise announcement and public release of two of those models for download and usage — the lower-parameter Llama 4 Scout and mid-tier Llama 4 Maverick — on Saturday, the response from the AI community on social media has been less than adoring. — Read More
Amazon Nova Reel 1.1: Featuring up to 2-minute multi-shot videos
At re:Invent 2024, we announced Amazon Nova models, a new generation of foundation models (FMs), including Amazon Nova Reel, a video generation model that creates short videos from text descriptions and optional reference images (together, the “prompt”).
Today, we introduce Amazon Nova Reel 1.1, which provides quality and latency improvements in 6-second single-shot video generation compared to Amazon Nova Reel 1.0. This update lets you generate multi-shot videos up to 2 minutes in length with a consistent style across shots. You can either provide a single prompt for a video of up to 2 minutes composed of 6-second shots, or design each shot individually with custom prompts. This gives you new ways to create video content through Amazon Bedrock. — Read More
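As a rough illustration of what starting a multi-shot Nova Reel job through Amazon Bedrock might look like with boto3: the job is asynchronous, so you start it and then poll for the output in S3. The model ID, task type, and request field names below are assumptions based on the Nova Reel request schema as I understand it, and the S3 bucket is a placeholder; check the current Bedrock documentation before relying on them.

```python
import boto3

# Assumes AWS credentials are configured and Nova Reel access is enabled in this region.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Single prompt expanded into multiple 6-second shots (field names are assumptions).
model_input = {
    "taskType": "MULTI_SHOT_AUTOMATED",
    "multiShotAutomatedParams": {
        "text": "A drone tour of a coastal town at sunset, ending at the lighthouse"
    },
    "videoGenerationConfig": {
        "durationSeconds": 120,   # up to 2 minutes in Nova Reel 1.1
        "fps": 24,
        "dimension": "1280x720",
        "seed": 42,
    },
}

# Video generation runs asynchronously: start the job, then poll get_async_invoke().
job = client.start_async_invoke(
    modelId="amazon.nova-reel-v1:1",                     # assumed ID for Nova Reel 1.1
    modelInput=model_input,
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-bucket/nova-reel/"}},
)
print(job["invocationArn"])
```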
Gemini 2.5: Our most intelligent AI model
Today we’re introducing Gemini 2.5, our most intelligent AI model. Our first 2.5 release is an experimental version of 2.5 Pro, which is state-of-the-art on a wide range of benchmarks and debuts at #1 on LMArena by a significant margin.
Gemini 2.5 models are thinking models, capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy. — Read More
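For developers, the experimental 2.5 Pro release is callable like any other Gemini model; the internal "thinking" happens server-side before the response is returned. Here is a minimal sketch using the google-genai SDK (pip install google-genai); the model ID reflects the experimental release at announcement time and will change as 2.5 rolls out more broadly.

```python
from google import genai

# Assumes an API key from Google AI Studio.
client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-pro-exp-03-25",   # experimental 2.5 Pro ID at launch; subject to change
    contents="Prove that the sum of two even integers is even.",
)
print(response.text)
```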
DOJ: Google must sell Chrome, Android could be next
Google has gotten its first taste of remedies that Donald Trump’s Department of Justice plans to pursue to break up the tech giant’s monopoly in search. In the first filing since Trump allies took over the department, government lawyers backed off a key proposal submitted by the Biden DOJ. The government won’t ask the court to force Google to sell off its AI investments, and the way it intends to handle Android is changing. However, the most serious penalty is intact—Google’s popular Chrome browser is still on the chopping block. — Read More
You knew it was coming: Google begins testing AI-only search results
Google has become so integral to online navigation that its name became a verb, meaning “to find things on the Internet.” Soon, Google might just tell you what’s on the Internet instead of showing you. The company has announced an expansion of its AI search features, powered by Gemini 2.0. Everyone will soon see more AI Overviews at the top of the results page, but Google is also testing a more substantial change in the form of AI Mode. This version of Google won’t show you the 10 blue links at all—Gemini completely takes over the results in AI Mode. — Read More
Amazon is reportedly developing its own AI ‘reasoning’ model
According to Business Insider, Amazon is developing an AI model that incorporates advanced “reasoning” capabilities, similar to models like OpenAI’s o3-mini and Chinese AI lab DeepSeek’s R1. The model may launch as soon as June under Amazon’s Nova brand, which the company introduced at its re:Invent developer conference last year. — Read More
Google’s new AI generates hypotheses for researchers
Over the past few years, Google has embarked on a quest to jam generative AI into every product and initiative possible. Google has robots summarizing search results, interacting with your apps, and analyzing the data on your phone. And sometimes, the output of generative AI systems can be surprisingly good despite lacking any real knowledge. But can they do science?
Google Research is now angling to turn AI into a scientist—well, a “co-scientist.” The company has a new multi-agent AI system based on Gemini 2.0 aimed at biomedical researchers that can supposedly point the way toward new hypotheses and areas of biomedical research. However, Google’s AI co-scientist boils down to a fancy chatbot.
… The AI co-scientist contains multiple interconnected models that churn through the input data and access Internet resources to refine the output. Inside the tool, the different agents challenge each other to create a “self-improving loop,” which is similar to the new raft of reasoning AI models like Gemini Flash Thinking and OpenAI o3. — Read More
Google maps the future of AI agents: Five lessons for businesses
A new Google white paper, titled “Agents”, imagines a future where AI takes on a more active and independent role in business. Published without much fanfare in September, the 42-page document is now gaining attention on X.com (formerly Twitter) and LinkedIn.
It introduces the concept of AI agents — software systems designed to go beyond today’s AI models by reasoning, planning and taking actions to achieve specific goals. Unlike traditional AI systems, which generate responses based solely on pre-existing training data, AI agents can interact with external systems, make decisions and complete complex tasks on their own. — Read More
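A minimal sketch of the loop that description implies: the model reasons about a goal, picks a tool that touches an external system, observes the result, and repeats until it can answer. All names here are illustrative rather than taken from the white paper, and call_llm is a placeholder for a real model call.

```python
import json
from typing import Callable, Dict

def call_llm(transcript: str) -> str:
    # Placeholder decision-maker so the loop runs: call a tool once, then answer.
    if "Observation:" not in transcript:
        return json.dumps({"tool": "web_search", "input": "weather in Austin today"})
    return json.dumps({"answer": "Summary based on the search observation."})

def run_agent(goal: str, tools: Dict[str, Callable[[str], str]], max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        decision = json.loads(call_llm(transcript))               # reason and plan
        if "answer" in decision:                                  # the agent decides it is done
            return decision["answer"]
        observation = tools[decision["tool"]](decision["input"])  # act on an external system
        transcript += f"Observation: {decision['tool']} -> {observation}\n"
    return "Stopped after max_steps without a final answer."

tools = {"web_search": lambda q: f"(stub) top results for {q!r}"}
print(run_agent("Check today's weather in Austin and suggest an outfit.", tools))
```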