It’s difficult to overstate the magnitude and impact of the changes Google has been making to its search engine and overall product suite this month, some of which were laid out during Tuesday’s I/O 2024 conference. The reason is not just that parent company Alphabet is determined to shove some form of “artificial intelligence” and machine learning software into your Chrome browser and your phone calls and your photo galleries and your YouTube habits. It’s that the central tool that powers and shapes the modern internet is about to permanently change—and it may make for an even worse search experience than that which has defined Google’s most recent era.
Google Search, that powerful, white, oblong textbox that became the default portal for organizing, showcasing, platforming, exploring, optimizing, and determining the ultimate reach of every single webpage across the entirety of cyberspace (often by paying other gatekeepers to favor it over other search tools), is becoming something else entirely: a self-ingesting singular webpage of its own, powered by the breadth of web information to which it once gave you access. Google is attempting to transform itself from a one-stop portal into a one-stop shop via the “Search Generative Experience,” where the Gemini chatbot will spit out a general “AI Overview” answer at the top of your search results. These answers will be informed by (or even plagiarized from) the very links now crowded out by a chatbox.
Yet the company doesn’t seem to want you to know anything about that. — Read More
Google targets filmmakers with Veo, its new generative AI video model
It’s been three months since OpenAI demoed its captivating text-to-video AI, Sora, and now Google is trying to steal some of that spotlight. Announced during its I/O developer conference on Tuesday, Google says Veo — its latest generative AI video model — can generate “high-quality” 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles.
Veo has “an advanced understanding of natural language,” according to Google’s press release, enabling the model to understand cinematic terms like “timelapse” or “aerial shots of a landscape.” Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are “more consistent and coherent,” depicting more realistic movement for people, animals, and objects throughout shots. — Read More
What GPT-4o illustrates about AI Regulation
Sam Hammond of the Foundation for American Innovation published his 95 Theses on AI last week. I believe that this post, like some of Hammond’s other writing, suffers from misplaced negativity and overconfidence in some assertions (biology, for example, is always more complicated than you think). …[O]ne of the theses deserves greater attention, concerning regulatory approaches to AI:
The dogma that we should only regulate technologies based on “use” or “risk” may sound more market-friendly, but often results in a far broader regulatory scope than technology-specific approaches (see: the EU AI Act)
Zvi Mowshowitz picked up on this too: …”When you regulate ‘use’ or ‘risk’ you need to check on everyone’s ‘use’ of everything, and you make a lot of detailed micro interventions, and everyone has to file lots of paperwork and do lots of dumb things, and the natural end result is universal surveillance and a full ‘that which is not compulsory is forbidden’ regime across much of existence.”
… This is a serious misunderstanding. — Read More
Where Does China Stand in the AI Wave?
Debates and discussions by Western public intellectuals on AI governance are closely followed in China. Whenever prominent figures like Sam Altman, Yoshua Bengio, or Stuart Russell give interviews, multiple Chinese media outlets swiftly translate and analyze their remarks.
English-speaking audiences, however, seldom engage with the AI governance perspectives offered by Chinese public intellectuals.
In this article, ChinaTalk presents the highlights and a full translation of a panel discussion on AI (archived here) that took place six weeks ago in Beijing. Hosted by the non-profit organization “The Intellectual” 知识分子 — whose public WeChat account serves as a platform for discussions on scientific issues and their governance implications — the panelists delved into a wide range of topics, including:
— the state of China’s AI industry, discussing the biggest bottlenecks, potential advantages in AI applications, and the role of the government in supporting domestic AI development;
— the technical aspects of AI, such as whether Sora understands physics, the reliance on the Transformer architecture, and how far we are from true AGI;
— and the societal implications — which jobs will be replaced by AI first, whether open- or closed-source is better for AI safety, and if AI developers should dedicate more resources to AI safety. — Read More
The Great Flattening
Apple did what needed to be done to get that unfortunate iPad ad out of the news; you know, the one that somehow found the crushing of musical instruments and bottles of paint to be inspirational:
…Creativity is in our DNA at Apple, and it’s incredibly important to us to design products that empower creatives all over the world…Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry.
The apology comes across as heartfelt — accentuated by the fact that an Apple executive, Tor Myhren, put his name to it — but I disagree with Myhren: the reason why people reacted so strongly to the ad is that it couldn’t have hit the mark more squarely. — Read More
Is AI lying to me? Scientists warn of growing capacity for deception
They can outwit humans at board games, decode the structure of proteins and hold a passable conversation, but as AI systems have grown in sophistication so has their capacity for deception, scientists warn.
The analysis, by Massachusetts Institute of Technology (MIT) researchers, identifies wide-ranging instances of AI systems double-crossing opponents, bluffing and pretending to be human. One system even altered its behaviour during mock safety tests, raising the prospect of auditors being lured into a false sense of security. — Read More
Read the Paper
OpenAI Launches GPT-4o and More Features for ChatGPT
If you’re using the free version of ChatGPT, you’re about to get a boost. On Monday, OpenAI debuted a new flagship model of its underlying engine, called GPT-4o, along with key changes to its user interface.
The chatbot, which sparked a whole new wave of consumer-friendly AI, has come in two flavors: a free version running on GPT-3.5, and ChatGPT Plus, a $20-per-month subscription running on GPT-4. With that subscription fee, you get access to a large language model that can handle a lot more data as it generates responses to your prompts.
GPT-4o should close that gap, at least somewhat. Your interactions with ChatGPT will also become more conversational. — Read More
The teens making friends with AI chatbots
Teens are opening up to AI chatbots as a way to explore friendship. But sometimes, the AI’s advice can go too far.
Early last year, 15-year-old Aaron was going through a dark time at school. He’d fallen out with his friends, leaving him feeling isolated and alone.
… “I’m not going to lie,” Aaron said. “I think I may be a little addicted to it.”
Aaron is one of many young users who have discovered the double-edged sword of AI companions. Many users like Aaron describe finding the chatbots helpful, entertaining, and even supportive. But they also describe feeling addicted to them, a complication that researchers and experts have been sounding the alarm about. It raises questions about how the AI boom is affecting young people and their social development, and what the future could hold if teenagers — and society at large — become more emotionally reliant on bots. — Read More
ChatGPT and the Future of the Human Mind
AI is a lever that becomes a lens
I remember when I first saw GPT-3 produce writing: that line of letters hammered out one by one, rolling horizontally across the screen in its distinctive staccato. It struck both wonder and terror into my heart.
I felt ecstatic that computers could finally talk back to me. But I also felt a heavy sense of dread. I’m a writer—what would happen to me?
We’ve all had this experience with AI over the last year and a half. It is an emotional rollercoaster. It feels like it threatens our conception of ourselves. — Read More
Kingdom of the Planet of the Apes’ VFX lead argues that the movie uses AI ethically
Right now, every industry faces discussions about how artificial intelligence might help or hinder work. In movies, creators are concerned that their work might be stolen to train AI replacements, that their future jobs might be taken by machines, or even that the entire process of filmmaking could become fully automated, removing the need for everything from directors to actors to everybody behind the scenes.
But “AI” is far more complicated than ChatGPT and Sora, the kinds of publicly accessible tools that crop up on social media. For visual effects artists, like those at Wētā FX who worked on Kingdom of the Planet of the Apes, machine learning can be just another powerful tool in an artistic arsenal, used to make movies bigger and better-looking than before. Kingdom visual effects supervisor Erik Winquist sat down with Polygon ahead of the movie’s release and discussed the ways AI tools were key to making the movie, and how the limitations on those tools still make the human element key to the process. — Read More