ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.
But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades. — Read More
Can you trust AI? Here’s why you shouldn’t
If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s silent about its own corporate parent’s misdeeds.
When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output. — Read More
Every Amazon division is working on generative AI projects
Just like pretty much every other major tech company, Amazon is placing a heavy focus on generative artificial intelligence. CEO Andy Jassy noted on Amazon’s latest earnings call that every division has multiple generative AI projects in the works.
“Inside Amazon, every one of our teams is working on building generative AI applications that reinvent and enhance their customers’ experience,” Jassy said. “But while we will build a number of these applications ourselves, most will be built by other companies, and we’re optimistic that the largest number of these will be built on [Amazon Web Services]. Remember, the core of AI is data. People want to bring generative AI models to the data, not the other way around.” — Read More
XQ-58 Valkyrie Solves Air Combat ‘Challenge Problem’ While Under AI Control
One of the U.S. Air Force’s stealthy XQ-58A Valkyrie drones recently completed a successful test flight demonstrating the ability to carry out aerial combat tasks autonomously using new artificial intelligence-driven software. The service says the test is part of a tiered approach to maturing autonomy “agents,” which involves training algorithms millions of times first in simulations and other testing. This includes the Collaborative Combat Aircraft program, or CCA, a key part of the larger Next Generation Air Dominance modernization initiative. — Read More
IBM and NASA teamed up to build the GPT of Earth sciences
NASA estimates that its Earth science missions will generate around a quarter million terabytes of data in 2024 alone. To help climate scientists and the research community efficiently dig through these reams of raw satellite data, IBM, HuggingFace and NASA have collaborated to build an open-source geospatial foundation model that will serve as the basis for a new class of climate and Earth science AIs that can track deforestation, predict crop yields and track greenhouse gas emissions. — Read More
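The excerpt doesn't say how researchers would actually pick the model up, but since it is released as an open-source foundation model on the Hugging Face Hub, fetching it could look roughly like the sketch below. The repo id (and the "Prithvi" name) is an assumption for illustration, not something confirmed by the announcement above.

```python
# Minimal sketch: pull an open geospatial foundation model from the
# Hugging Face Hub for later fine-tuning on a downstream Earth-science task.
# The repo id below is an ASSUMPTION for illustration; check the actual
# IBM/NASA release for the real model name.
from pathlib import Path

from huggingface_hub import snapshot_download

REPO_ID = "ibm-nasa-geospatial/Prithvi-100M"  # assumed repo id, may differ

# Download the full model snapshot (weights, config, preprocessing helpers).
local_dir = snapshot_download(repo_id=REPO_ID)

# Inspect what the snapshot contains before wiring it into a training loop.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```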
Open sourcing AudioCraft: Generative AI for audio made simple and available to all
Imagine a professional musician being able to explore new compositions without having to play a single note on an instrument. Or an indie game developer populating virtual worlds with realistic sound effects and ambient noise on a shoestring budget. Or a small business owner adding a soundtrack to their latest Instagram post with ease. That’s the promise of AudioCraft — our simple framework that generates high-quality, realistic audio and music from text-based user inputs after training on raw audio signals as opposed to MIDI or piano rolls.
AudioCraft consists of three models: MusicGen, AudioGen, and EnCodec. MusicGen, which was trained with Meta-owned and specifically licensed music, generates music from text-based user inputs, while AudioGen, which was trained on public sound effects, generates audio from text-based user inputs. Today, we’re excited to release an improved version of our EnCodec decoder, which allows for higher quality music generation with fewer artifacts; our pre-trained AudioGen model, which lets you generate environmental sounds and sound effects like a dog barking, cars honking, or footsteps on a wooden floor; and all of the AudioCraft model weights and code. The models are available for research purposes and to further people’s understanding of the technology. We’re excited to give researchers and practitioners access so they can train their own models with their own datasets for the first time and help advance the state of the art. — Read More
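As a rough illustration of the text-to-music workflow the post describes, a sketch along the lines of the project's published examples might look like the following; the package layout, checkpoint name, and generation parameters are assumptions based on the open-source release and may differ between versions.

```python
# Rough sketch of text-to-music generation with the open-sourced AudioCraft
# package. Model and checkpoint names follow the project's public examples
# and are assumptions here; they may change between releases.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint (the small variant keeps memory modest).
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio per prompt

# Text prompts describing the music to generate.
descriptions = [
    "lo-fi hip hop beat with soft piano and vinyl crackle",
    "upbeat acoustic folk with hand claps",
]

# Generate one waveform per prompt and write each as a loudness-normalized WAV.
wavs = model.generate(descriptions)
for idx, wav in enumerate(wavs):
    audio_write(f"sample_{idx}", wav.cpu(), model.sample_rate, strategy="loudness")
```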
Google’s AI search is getting more video and better links
Google’s AI-powered Search Generative Experience is getting a big new feature: images and video. If you’ve enabled the AI-based SGE feature in Search Labs, you’ll now start to see more multimedia in the colorful summary box at the top of your search results. Google’s also working on making that summary box appear faster and adding more context to the links it puts in the box.
SGE may still be in the “experiment” phase, but it’s very clearly the future of Google Search. “It really gives us a chance to, now, not always be constrained in the way search was working before,” CEO Sundar Pichai said on Alphabet’s most recent earnings call. “It allows us to think outside the box.” He then said that “over time, this will just be how search works.” — Read More
Moviewiser
ChinAI #231: Latest SuperCLUE rankings of large language models
Context: Back in ChinAI #224, we highlighted the SuperCLUE benchmark, released in May, which aimed to test large language models from Chinese and international labs along three main dimensions: 1) foundational capabilities such as dialogue and coding; 2) specialized and academic capabilities like physics knowledge; and 3) capabilities in Chinese-language particularities such as knowledge of classical Chinese literature and Chinese idioms. Last week, the SuperCLUE team released its July rankings (link to original Chinese), updated with 3700 confidential test questions and 20 total participating models. — Read More