Why Are We Letting the AI Crisis Just Happen?

Bad actors could seize on large language models to engineer falsehoods at unprecedented scale.

New AI systems such as ChatGPT, the overhauled Microsoft Bing search engine, and the reportedly soon-to-arrive GPT-4 have utterly captured the public imagination. ChatGPT is the fastest-growing online application, ever, and it’s no wonder why. Type in some text, and instead of getting back web links, you get well-formed, conversational responses on whatever topic you choose—an undeniably seductive vision.

But the public, and the tech giants, aren’t the only ones who have become enthralled with the Big Data–driven technology known as the large language model. Bad actors have taken note of it as well. At the extreme end, there’s Andrew Torba, the CEO of the far-right social network Gab, who said recently that his company is actively developing AI tools to “uphold a Christian worldview” and fight “the censorship tools of the Regime.” But even users with no ideological motivation are having an impact. Clarkesworld, a publisher of sci-fi short stories, temporarily stopped taking submissions last month because it was being spammed by AI-generated stories—the result of influencers promoting ways to use the technology to “get rich quick,” the magazine’s editor told The Guardian.

This is a moment of immense peril … Read More

#nlp, #fake

Worldcoin, co-founded by Sam Altman, is betting the next big thing in AI is proving you are human

Fake virtual identities are nothing new. The ability to so easily create them has been both a boon for social media platforms — more “users” — and a scourge, tied as they are to the spread of conspiracy theories, distorted discourse and other societal ills.

Still, Twitter bots are nothing compared with what the world is about to experience, as any time spent with ChatGPT illustrates. Fast-forward a few years and it will be impossible to know whether someone is communicating with another mortal or a neural network.

Sam Altman knows this. Altman is the co-founder and CEO of ChatGPT parent OpenAI and has long had more visibility than most into what’s around the corner. It’s why, more than three years ago, he conceived of a new company that could serve first and foremost as proof of personhood. Called Worldcoin, its three-part mission — to create a global ID, a global currency, and an app that enables payments, purchases and transfers using its own token, along with other digital assets and traditional currencies — is as ambitious as it is technically complicated, but the opportunity is also vast. Read More

#fake

How I Broke Into a Bank Account With an AI-Generated Voice

Banks in the U.S. and Europe tout voice ID as a secure way to log into your account. I proved it’s possible to trick such systems with free or cheap AI-generated voices.

The bank thought it was talking to me; the AI-generated voice certainly sounded the same.

On Wednesday, I phoned my bank’s automated service line. To start, the bank asked me to say in my own words why I was calling. Rather than speak out loud, I clicked a file on my nearby laptop to play a sound clip: “check my balance,” my voice said. But this wasn’t actually my voice. It was a synthetic clone I had made using readily available artificial intelligence technology.

“Okay,” the bank replied. It then asked me to enter or say my date of birth as the first piece of authentication. After typing that in, the bank said “please say, ‘my voice is my password.’” 

Again, I played a sound file from my computer. “My voice is my password,” the voice said. The bank’s security system spent a few seconds authenticating the voice. 

“Thank you,” the bank said. I was in. Read More

#audio, #fake

Could ChatGPT supercharge false narratives?

Many warn of the tool’s potential to be a misinformation superspreader, capable of instantly producing news articles, blogs and political speeches.

ChatGPT, a new artificial intelligence application by OpenAI, has captured the imagination of the internet. Some have suggested it’s the largest technological advancement in modern history. In a recent interview, Noam Chomsky called it “basically high tech plagiarism.” Others have suggested large language models like ChatGPT spell the end for Google search, because they spare users the work of filtering through multiple websites to find digestible information.

The technology works by training on vast quantities of text drawn from the internet, distilling it into a statistical model, and using that model to generate new content in response to user prompts. Users can ask it to produce almost any kind of text-based content.
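To make that prompt-in, text-out loop concrete, here is a minimal sketch using the small open-source GPT-2 model via Hugging Face’s transformers library. ChatGPT itself is a far larger, instruction-tuned model behind a proprietary API, so treat this purely as an illustration of the mechanism, not OpenAI’s actual stack.

```python
from transformers import pipeline

# Load a small open-source language model. ChatGPT works on the same
# principle (repeatedly predict the next token) at vastly larger scale.
generator = pipeline("text-generation", model="gpt2")

prompt = "Write a short news blurb about a city council meeting:"
result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```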

Given its clear creative power, many are warning of ChatGPT’s potential to be a misinformation superspreader, capable of instantly producing news articles, blogs, eulogies and political speeches in the style of particular politicians, writing whatever the user desires. It’s not hard to see how, with only modest further advances, AI-powered bot accounts on social media could become virtually indistinguishable from humans. Read More

#chatbots, #fake

The People Onscreen Are Fake. The Disinformation Is Real.

Read More

#fake, #videos

OpenAI releases tool to detect AI-generated text, including from ChatGPT

After telegraphing the move in media appearances, OpenAI has launched a tool that attempts to distinguish between human-written and AI-generated text — like the text produced by the company’s own ChatGPT and GPT-3 models. The classifier isn’t particularly accurate — it correctly flags AI-written text only about 26% of the time, OpenAI notes — but OpenAI argues that, used in tandem with other methods, it could help prevent AI text generators from being abused.

“The classifier aims to help mitigate false claims that AI-generated text was written by a human. However, it still has a number of limitations — so it should be used as a complement to other methods of determining the source of text instead of being the primary decision-making tool,” an OpenAI spokesperson told TechCrunch via email. “We’re making this initial classifier available to get feedback on whether tools like this are useful, and hope to share improved methods in the future.” Read More
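OpenAI’s new classifier launched as a hosted web tool. For a runnable flavor of the same classification approach, here is a minimal sketch using OpenAI’s earlier, open-sourced GPT-2 output detector, which is distributed as a standard text-classification model on Hugging Face. The model name and labels below belong to that older detector, not to the new ChatGPT-era classifier.

```python
from transformers import pipeline

# OpenAI's older GPT-2 output detector, published as an open model; it
# scores a passage as "Real" (human) vs. "Fake" (model-generated). The
# new ChatGPT-era classifier works along similar lines but is a
# separate, hosted tool.
detector = pipeline("text-classification", model="roberta-base-openai-detector")

print(detector("The mitochondria is the powerhouse of the cell."))
# e.g. [{'label': 'Real', 'score': 0.98}]  (scores vary by input)
```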

#chatbots, #fake

Researchers fear Microsoft’s ‘dangerous’ new AI voice technology

According to Ars Technica, Microsoft has developed an AI system that uses machine learning to accurately mimic anyone’s voice, complete with novel, generated sentences, from just three seconds of audio input.

… According to the report, Microsoft engineers know this technology could be dangerous in the wrong hands, used to create malicious “deepfakes.” A system that convincingly fakes people’s voices could do everything from smearing celebrities or politicians with fabricated racist quotes to discrediting a former spouse in a custody dispute. It could even be used to create virtual pornography of a person without their consent, or to commit wire fraud by impersonating a CEO and tricking a company into transferring money. Read More

#audio, #fake

Deepfake Text Detector Tool GPTZero Spots AI Writing

A new tool is attempting to spot when text is written by ChatGPT and other generative AI engines. Edward Tian, a Princeton student and former open source investigator for BBC Africa Eye, created GPTZero to identify deepfake text, a subject attracting growing interest in the academic and business worlds as the debate continues over how to respond to the potential misuse of AI.

Tian’s app scans submitted text for indicators of AI origin in the randomness and complexity of the writing, technically referred to as “perplexity” and “burstiness.” GPTZero was popular enough to almost immediately crash its hosting website, but it remains available to try online. … Voicebot ran multiple tests of GPTZero using six different generative AI tools, including ChatGPT, a few GPT-3-derived tools, and AI21. Tian’s creation caught the AI-generated text every time and correctly identified human-written text in more than a dozen cases. Tian doesn’t yet have enough data to publish a formal accuracy figure, though he said he is working on one. Not bad for an app thrown together on New Year’s Eve. Read More
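For a sense of what “perplexity” and “burstiness” measure, here is a rough sketch that scores text with the open-source GPT-2 model: perplexity is how surprising the model finds the text, and burstiness (here, the standard deviation of per-sentence perplexity) captures how much that surprise varies. Tian’s actual features, scoring model, and thresholds are not public, so this is illustrative only.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Passing labels = inputs makes the model report its own
    # cross-entropy loss; exponentiating that gives perplexity.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    # Spread of per-sentence perplexity; human prose tends to swing
    # between plain and surprising sentences more than model output.
    sentences = [s.strip() for s in text.split(".") if len(s.strip()) > 3]
    scores = [perplexity(s) for s in sentences]
    if len(scores) < 2:
        return 0.0
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((s - mean) ** 2 for s in scores) / len(scores))

sample = "The quick brown fox jumps over the lazy dog. It was a dark and stormy night."
print(f"perplexity={perplexity(sample):.1f}, burstiness={burstiness(sample):.1f}")
```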

#fake, #nlp

AI-generated fake faces have become a hallmark of online influence operations

Fake accounts on social media are increasingly likely to sport fake faces.

Facebook parent company Meta says more than two-thirds of the influence operations it found and took down this year used profile pictures that were generated by a computer.

As the artificial intelligence behind these fakes has become more widely available and better at creating life-like faces, bad actors are adapting them for their attempts to manipulate social media networks. Read More

#fake

OpenAI’s attempts to watermark AI text hit limits

It’s proving tough to rein in systems like ChatGPT

Did a human write that, or ChatGPT? It can be hard to tell — perhaps too hard, its creator OpenAI thinks, which is why it is working on a way to “watermark” AI-generated content.

In a lecture at the University of Texas at Austin, computer science professor Scott Aaronson, currently a guest researcher at OpenAI, revealed that OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from. Read More
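Aaronson has sketched the idea publicly, but OpenAI has not published its method. The toy below illustrates the general construction explored in academic watermarking work (e.g., Kirchenbauer et al.): a secret key plus the preceding token pseudorandomly selects a “green list” of vocabulary items, sampling is nudged toward that list, and anyone holding the key can later count green tokens to test for the watermark. All names, values, and parameters here are hypothetical.

```python
import hashlib
import random

SECRET_KEY = b"demo-key"                  # hypothetical; held by the provider
VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    # Keyed PRF: hashing (secret key, previous token) seeds the partition,
    # so the "green" half of the vocabulary changes at every step.
    seed = hashlib.sha256(SECRET_KEY + prev_token.encode()).digest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def sample_watermarked(prev_token: str, logits: dict) -> str:
    # Nudge generation toward the green list by boosting those logits.
    green = green_list(prev_token)
    boosted = {t: s + (2.0 if t in green else 0.0) for t, s in logits.items()}
    return max(boosted, key=boosted.get)  # greedy decoding, for simplicity

def green_fraction(tokens: list) -> float:
    # Detection: about 0.5 for ordinary text, markedly higher if watermarked.
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Generate 50 tokens from random logits and check the watermark signal.
rng = random.Random(0)
tokens = ["tok0"]
for _ in range(50):
    logits = {t: rng.gauss(0.0, 1.0) for t in VOCAB}
    tokens.append(sample_watermarked(tokens[-1], logits))
print(f"green-token fraction: {green_fraction(tokens):.2f}")  # well above 0.5
```

The appeal of the statistical approach is that the signal is invisible to readers and survives light paraphrasing, while detection requires only the key, not access to the generating model.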

#fake