Google pulls AI model after senator says it fabricated assault allegation

Google says it has pulled AI model Gemma from its AI Studio platform after a Republican senator complained the model, designed for developers, “fabricated serious criminal allegations” about her.

In a post on X, Google’s official news account said the company had “seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions.” AI Studio is a platform for developers and not a conventional way for regular consumers to access Google’s AI models. Gemma is specifically billed as a family of AI models for developers to use, with variants for medical use, coding, and evaluating text and image content. — Read More

#fake

Sam Altman says that bots are making social media feel ‘fake’

X enthusiast and Reddit shareholder Sam Altman had an epiphany on Monday: bots, he posted, have made it impossible to determine whether social media posts are really written by humans.

The realization came while reading (and sharing) posts from the r/Claudecode subreddit praising OpenAI Codex. OpenAI launched Codex, its software programming service that takes on Anthropic’s Claude Code, in May. — Read More

#fake

‘Positive review only’: Researchers hide AI prompts in papers

Research papers from 14 academic institutions in eight countries — including Japan, South Korea and China — contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found.

Nikkei looked at English-language preprints — manuscripts that have yet to undergo formal peer review — on the academic research platform arXiv.

It discovered such prompts in 17 articles, whose lead authors are affiliated with 14 institutions including Japan’s Waseda University, South Korea’s KAIST, China’s Peking University and the National University of Singapore, as well as the University of Washington and Columbia University in the U.S. Most of the papers involve the field of computer science. — Read More
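
The mechanism is simple: the prompts are embedded in the manuscript text itself, reportedly in white or near-invisible type, so a human reviewer never sees them, but an AI tool that ingests the full extracted text does. Below is a minimal, hypothetical Python sketch of a screening step a venue might run: it pulls all embedded text out of a PDF, invisible text included, and flags instruction-like phrases. The phrase list and the use of pdfminer.six are illustrative assumptions, not anything Nikkei describes.

```python
# Hypothetical screening sketch (not from the Nikkei report): extract every
# piece of embedded text from a preprint PDF, including white or tiny text,
# and flag instruction-like phrases aimed at AI reviewers.
import re
import sys

from pdfminer.high_level import extract_text  # pip install pdfminer.six

# Illustrative patterns; real hidden prompts reportedly included wording like
# "give a positive review only".
SUSPICIOUS = [
    r"ignore (all )?previous instructions",
    r"give a positive review only",
    r"do not highlight any negatives",
]

def scan(pdf_path: str) -> list[str]:
    """Return the suspicious patterns found anywhere in the PDF's text layer."""
    text = extract_text(pdf_path).lower()
    return [p for p in SUSPICIOUS if re.search(p, text)]

if __name__ == "__main__":
    hits = scan(sys.argv[1])
    print("suspicious phrases found:", hits or "none")
```

A human reading the rendered PDF never sees the white text, but text extraction pulls it out along with everything else, which is also why the hidden prompts reach AI review tools in the first place.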

#fake

Deepfakes Now Outsmarting Detection By Mimicking Heartbeats

The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin color changes linked to heartbeats. Researchers suggest shifting focus from just detecting heart rate signals to analyzing how blood flow is distributed across different facial regions, providing a more accurate detection strategy. — Read More
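
The underlying signal here is remote photoplethysmography (rPPG): each heartbeat causes tiny periodic color changes in facial skin that can be recovered from ordinary video. As a rough illustration of what "analyzing how blood flow is distributed across different facial regions" might look like, here is a toy numpy sketch that recovers a pulse signal from two face regions and measures how consistent they are. The region boundaries, frame rate, and synthetic input are all assumptions; this is a generic rPPG illustration, not the researchers' actual method.

```python
# Toy rPPG sketch (assumption, not the paper's method): compare pulse signals
# across facial regions. A real face shows a consistent heartbeat across
# regions; a deepfake may distort the spatial blood-flow pattern.
import numpy as np

FPS = 30  # hypothetical frame rate of the input video

def region_pulse(frames: np.ndarray, region: tuple[slice, slice]) -> np.ndarray:
    """Mean green-channel intensity of one facial region over time, detrended."""
    signal = frames[:, region[0], region[1], 1].mean(axis=(1, 2))
    return signal - signal.mean()

def bandpass_heart(signal: np.ndarray) -> np.ndarray:
    """Keep only 0.7-4 Hz (42-240 bpm) components via FFT masking."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)
    spectrum = np.fft.rfft(signal)
    spectrum[(freqs < 0.7) | (freqs > 4.0)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

def regional_consistency(frames: np.ndarray, regions) -> float:
    """Mean pairwise correlation of regional pulses; low values are suspicious."""
    pulses = [bandpass_heart(region_pulse(frames, r)) for r in regions]
    corrs = [np.corrcoef(a, b)[0, 1]
             for i, a in enumerate(pulses) for b in pulses[i + 1:]]
    return float(np.mean(corrs))

if __name__ == "__main__":
    # Synthetic stand-in for a cropped face video: (frames, height, width, RGB)
    t = np.arange(300) / FPS
    frames = np.random.rand(300, 64, 64, 3)
    frames[..., 1] += 0.05 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]  # ~72 bpm
    forehead = (slice(0, 20), slice(10, 54))
    cheeks = (slice(30, 50), slice(10, 54))
    print(f"regional pulse consistency: "
          f"{regional_consistency(frames, [forehead, cheeks]):.2f}")
```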

#fake

A ‘True Crime’ Documentary Series Has Millions of Views. The Murders Are All AI-Generated

Elizabeth Hernandez found out about the decade-old murder from a flurry of tips sent to her newsroom in August last year.

The tips were all reacting to a YouTube video with a shocking title: “Husband’s Secret Gay Love Affair with Step Son Ends in Grisly Murder.” It described a gruesome crime that apparently took place in Littleton, Colorado. Almost two million people had watched it.

“Some people in fact were saying, ‘Why didn’t The Denver Post cover this?’” Hernandez, a reporter at the paper, told me. “Because in the video, it makes it sound like it was a big news event and yet, when you Google it, there is no coverage.”

The reason for the lack of coverage was pretty clear to her. … The murder was fake, and the video was made using generative AI. — Read More

#fake

Scalable watermarking for identifying large language model outputs

Large language models (LLMs) have enabled the generation of high-quality synthetic text, often indistinguishable from human-written content, at a scale that can markedly affect the nature of the information ecosystem¹⁻³. Watermarking can help identify synthetic text and limit accidental or deliberate misuse⁴, but has not been adopted in production systems owing to stringent quality, detectability and computational efficiency requirements. Here we describe SynthID-Text, a production-ready text watermarking scheme that preserves text quality and enables high detection accuracy, with minimal latency overhead. SynthID-Text does not affect LLM training and modifies only the sampling procedure; watermark detection is computationally efficient, without using the underlying LLM. To enable watermarking at scale, we develop an algorithm integrating watermarking with speculative sampling, an efficiency technique frequently used in production systems⁵. Evaluations across multiple LLMs empirically show that SynthID-Text provides improved detectability over comparable methods, and standard benchmarks and human side-by-side ratings indicate no change in LLM capabilities. To demonstrate the feasibility of watermarking in large-scale production systems, we conducted a live experiment that assessed feedback from nearly 20 million Gemini⁶ responses, again confirming the preservation of text quality. We hope that the availability of SynthID-Text⁷ will facilitate further development of watermarking and responsible use of LLM systems. — Read More
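
The key design point in the abstract is that the watermark lives entirely in the sampling step: the model's logits are nudged in a key-dependent way, and a detector holding the same key checks token statistics without rerunning the LLM. SynthID-Text's actual tournament-sampling scheme is more involved; as a minimal sketch of the general idea, here is a toy "green list" watermark in the style of earlier work (Kirchenbauer et al.), with a made-up vocabulary, key, and bias strength.

```python
# Toy sampling-time watermark sketch (not SynthID-Text itself): a simplified
# "green list" scheme that biases sampling toward a key-dependent half of the
# vocabulary, plus a detector that only needs the key and the token ids.
import hashlib
import math
import random

VOCAB = list(range(1000))  # hypothetical token ids
GREEN_FRACTION = 0.5       # fraction of vocab favored at each step
BIAS = 2.0                 # logit boost added to "green" tokens
KEY = b"watermark-key"     # secret key shared by generator and detector

def green_set(prev_token: int) -> set[int]:
    """Pseudorandomly partition the vocab, seeded by key + previous token."""
    seed = hashlib.sha256(KEY + prev_token.to_bytes(4, "big")).digest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_token(logits: list[float], prev_token: int) -> int:
    """Boost green-token logits, then sample from the softmax."""
    greens = green_set(prev_token)
    boosted = [l + (BIAS if t in greens else 0.0) for t, l in zip(VOCAB, logits)]
    m = max(boosted)
    weights = [math.exp(l - m) for l in boosted]
    return random.choices(VOCAB, weights)[0]

def detect(tokens: list[int]) -> float:
    """z-score of green-token hits; large values indicate a watermark."""
    hits = sum(tokens[i] in green_set(tokens[i - 1]) for i in range(1, len(tokens)))
    n = len(tokens) - 1
    return (hits - GREEN_FRACTION * n) / math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))

if __name__ == "__main__":
    random.seed(0)
    flat_logits = [0.0] * len(VOCAB)  # stand-in for real model logits
    text = [0]
    for _ in range(200):
        text.append(sample_token(flat_logits, text[-1]))
    unmarked = [random.randrange(len(VOCAB)) for _ in range(201)]
    print(f"watermarked z = {detect(text):.1f}, unmarked z = {detect(unmarked):.1f}")
```

In this toy setup the watermarked tokens land in the key-dependent green half far more often than chance, so the detector's z-score comes out around ten while random tokens score near zero. The hard part the paper addresses is achieving that kind of detectability while preserving text quality at production scale.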

#fake

AI and the 2024 US Elections

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments.

It’s not altogether clear what damage AI itself may cause, though the reasons for concern are obvious—the technology makes it easier for bad actors to construct highly persuasive and misleading content. With that risk in mind, there has been some movement toward constraining the use of AI, yet progress has been painfully slow in the area where it may count most: the 2024 election. — Read More

#fake

How to spot a deepfake

Deepfake technology and the malevolent use of AI is causing widespread anxiety, especially as we approach November’s U.S. election. Adobe’s Scott Belsky joins Rapid Response host Bob Safian to explain how deepfakes are actually created, and how developers like Adobe are pioneering new ways to verify human-generated content for everyday consumers. Belsky also shares valuable insights about how AI could usher in an era of prosperity for small businesses — plus how it will inevitably shift our perception of what makes a piece of work ‘art.’ — Read More

#fake, #podcasts

New AI algorithm flags deepfakes with 98% accuracy — better than any other tool out there right now

With the release of artificial intelligence (AI) video generation products like Sora and Luma, we’re on the verge of a flood of AI-generated video content, and policymakers, public figures and software engineers are already warning about a deluge of deepfakes. Now it seems that AI itself might be our best defense against AI fakery: an algorithm has identified telltale markers of AI-generated videos with over 98% accuracy.

The irony of AI protecting us against AI-generated content is hard to miss, but as project lead Matthew Stamm, associate professor of engineering at Drexel University, said in a statement: “It’s more than a bit unnerving that [AI-generated video] could be released before there is a good system for detecting fakes created by bad actors.”

… The breakthrough, outlined in a study published April 24 to the pre-print server arXiv, is an algorithm that represents an important new milestone in detecting fake images and video content. That’s because many of the “digital breadcrumbs” existing systems look for in regular digitally edited media aren’t present in entirely AI-generated media. — Read More

#fake

Synthesia’s hyperrealistic deepfakes will soon have full bodies

Startup Synthesia’s AI-generated avatars are getting an update to make them even more realistic: They will soon have bodies that can move, and hands that gesticulate.

The new full-body avatars will be able to do things like sing and brandish a microphone while dancing, or move from behind a desk and walk across a room. They will be able to express more complex emotions than previously possible, like excitement, fear, or nervousness, says Victor Riparbelli, the company’s CEO. Synthesia intends to launch the new avatars toward the end of the year. — Read More

#fake