AI eats the web

Google’s shift toward AI-generated search results, displacing the familiar list of links, is rewiring the internet — and could accelerate the decline of the World Wide Web, now more than 30 years old.

Why it matters: A world where Google answers most questions in a single machine voice makes online life more convenient — and duller.

— The change also threatens to cut into Google’s revenue from search ads and to starve future AIs of the human data they’ll need. — Read More

#big7

Google Is About to Change Everything—and Hopes You Won’t Find Out

It’s difficult to overstate the magnitude and impact of the changes Google has been making to its search engine and overall product suite this month, some of which were laid out during Tuesday’s I/O 2024 conference. The reason is not just that parent company Alphabet is determined to shove some form of “artificial intelligence” and machine learning software into your Chrome browser, your phone calls, your photo galleries, and your YouTube habits. It’s that the central tool that powers and shapes the modern internet is about to permanently change — and it may make for an even worse search experience than the one that has defined Google’s most recent era.

Google Search, that powerful, white, oblong textbox that became the default portal for organizing, showcasing, platforming, exploring, optimizing, and determining the ultimate reach of every webpage across the entirety of cyberspace (often because Google paid other gatekeepers to favor it over rival search tools), is becoming something else entirely: a self-ingesting webpage of its own, powered by the breadth of web information to which it once gave you access. Via the “Search Generative Experience,” Google is attempting to transform itself from a one-stop portal into a one-stop shop, with the Gemini chatbot spitting out a general “AI Overview” answer at the top of your search results. These answers will be informed by (or even plagiarized from) the very links now crowded out by a chatbox.

Yet the company doesn’t seem to want you to know anything about that. — Read More

#big7

New Microsoft AI model may challenge GPT-4 and Google Gemini

Microsoft is working on a new large-scale AI language model called MAI-1, which could potentially rival state-of-the-art models from Google, Anthropic, and OpenAI, according to a report by The Information. This marks the first time Microsoft has developed an in-house AI model of this magnitude since investing over $10 billion in OpenAI for the rights to reuse the startup’s AI models. OpenAI’s GPT-4 powers not only ChatGPT but also Microsoft Copilot. — Read More

#big7

How Meta is paving the way for synthetic social networks

On Thursday, the AI hype train rolled through Meta’s family of apps. The company’s Meta AI assistant, a ChatGPT-like bot that can answer a wide range of questions, is beginning to roll out broadly across Facebook, Messenger, Instagram and WhatsApp.

Powering the bot is Llama 3, the latest and most capable version of Meta’s large language model. As with its predecessors — and in contrast to models from OpenAI, Google, and Anthropic — Llama 3 is open source. Today Meta made it available in two sizes: one with 8 billion parameters, and one with 70 billion parameters. (Parameters are the variables inside a large language model; in general, the more parameters a model contains, the smarter and more sophisticated its output.) — Read More
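The notion of a parameter count can be made concrete with a toy sketch (pure Python, no ML library; the layer shapes below are arbitrary illustrations, not Llama 3’s actual architecture). A model’s parameters are just its learned numbers — here, the weights and biases of a single linear layer:

```python
# Toy illustration of "parameters": the learned numbers inside a model.
# A single linear layer y = Wx + b mapping 10 inputs to 5 outputs.
in_features, out_features = 10, 5

weights = [[0.0] * in_features for _ in range(out_features)]  # W: 5 x 10 = 50 numbers
biases = [0.0] * out_features                                 # b: 5 numbers

n_params = sum(len(row) for row in weights) + len(biases)
print(n_params)  # 55
```

A real LLM stacks many such layers; Llama 3’s smaller release has 8 billion of these learned numbers, the larger one 70 billion.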

#big7, #devops

Google Consolidates AI-Building Teams Across Research and DeepMind

Google is consolidating the teams that focus on building artificial intelligence (AI) models across Google Research and Google DeepMind.

All this work will now be done within Google DeepMind, Sundar Pichai, CEO of Google and Alphabet, said in a note to employees posted on the company’s website Thursday (April 18). — Read More

#big7, #strategy

Amazon CEO: “We’re deeply investing” in generative AI

Amazon CEO Andy Jassy revealed details about the company’s investments in generative AI in his annual shareholder letter published Thursday morning.

“…[T]here are three distinct layers in the GenAI stack, each of which is gigantic, and each of which we’re deeply investing,” Jassy writes.

The “bottom layer” of Amazon’s AI strategy is to help developers and companies train models and produce predictions. Amazon says having its own custom AI training and inference chips will bring down costs for customers.

A “middle layer” serves companies that want to use their own data to customize existing foundational models and gain security and other features to build and scale generative AI applications.

The “top layer” is where Amazon builds generative AI applications for its own consumer businesses. Examples include “Rufus,” Amazon’s AI-powered shopping assistant, and “Amazon Q,” its assistant for Amazon Web Services. — Read More

#big7

Amazon Gives Anthropic $2.75 Billion So It Can Spend It on AWS XPUs

If Microsoft has the half of OpenAI that didn’t leave, then Amazon and its Amazon Web Services cloud division need the half of OpenAI that did leave – meaning Anthropic. And that means Amazon needs to pony up a lot more money than Google, which has also invested in Anthropic but which also has its own Gemini LLM, if it hopes to have more leverage – and get the GPU system rentals in return.

We live in strange times. … Microsoft investing $13 billion in OpenAI – with a $10 billion promise last year – and now Amazon making good on its promise to invest $4 billion in Anthropic by kicking in the second tranche of $2.75 billion is a brilliant way to buy a stake in any AI startup. You get access to the startup’s models, you get a sense of its roadmap, and you get to be the first one to commercialize its products at scale.

As we have pointed out before, … [t]here is a danger of this looking like roundtripping, where the money just moves from the IT giant to the AI startup as an investment and then back again to the IT giant. (This kind of thing used to happen in the IT channel from time to time.) It would be enlightening to see how these deals are really structured. But there is a likelihood that they are really minority stakes in the AI startups for enormous sums and an actual exchange of goods and services on the part of both parties. — Read More

#big7, #strategy

Facebook Is Filled With AI-Generated Garbage—and Older Adults Are Being Tricked

As AI-generated content proliferates online and clutters social media feeds, you may have noticed more images cropping up that evoke the uncanny valley effect — relatively normal scenes that also carry surreal details like excess fingers or gibberish words.

Among these misleading posts, young users have spotted some obviously faux images (for example, skiing dogs and toddlers, baffling “hand-carved” ice sculptures, and massive crocheted cats). But AI-made art isn’t evident to everyone: It seems that older users — generally those in Generation X and above — are falling for these visuals en masse on social media. It’s not just evidenced by TikTok videos and a cursory glance at your mom’s Facebook activity either — there’s data behind it.

Facebook has become increasingly popular with seniors looking for entertainment and companionship as younger users have departed for flashier apps like TikTok and Instagram. Recently, Facebook’s algorithm seems to be pushing wacky AI images into users’ feeds to sell products and amass followings, according to a preprint paper announced on March 18 by researchers at Stanford University and Georgetown University. — Read More

Read the Paper

#big7, #fake

Google’s new AI will play video games with you — but not to win

Google DeepMind unveiled SIMA, an AI agent trained to learn gaming skills so that it plays more like a human than like an overpowered AI that does its own thing. SIMA, which stands for Scalable Instructable Multiworld Agent, is currently a research project only.

SIMA will eventually learn how to play any video game, including open-world games and games with no linear path to the end. It’s not intended to replace existing game AI; think of it more as another player that meshes well with your party. It mixes natural-language instruction with an understanding of 3D worlds and image recognition. — Read More

#big7

Google’s new Gemini model can analyze an hour-long video — but few people can use it

Last October, a research paper published by a Google data scientist, Databricks CTO Matei Zaharia, and UC Berkeley professor Pieter Abbeel posited a way to allow GenAI models — i.e. models along the lines of OpenAI’s GPT-4 and ChatGPT — to ingest far more data than was previously possible. In the study, the co-authors demonstrated that, by removing a major memory bottleneck for AI models, they could enable models to process millions of words, as opposed to the hundreds of thousands that the most capable models of the time maxed out at.

AI research moves fast, it seems.

Today, Google announced the release of Gemini 1.5 Pro, the newest member of its Gemini family of GenAI models. Designed to be a drop-in replacement for Gemini 1.0 Pro (which formerly went by “Gemini Pro 1.0” for reasons known only to Google’s labyrinthine marketing arm), Gemini 1.5 Pro is improved in a number of areas compared with its predecessor, perhaps most significantly in the amount of data that it can process. — Read More

#big7