An Indigenous Perspective on Generative AI

Earlier this month, Getty Images, one of the world’s most prominent suppliers of editorial photography, stock images, and other media, announced that it had commenced legal proceedings in the High Court of Justice in London against Stability AI, a British startup that says it builds AI solutions using “collective intelligence.” Getty claims that Stability AI infringed its intellectual property rights by including content owned or represented by Getty Images in its training data, unlawfully copying and processing millions of copyrighted images and their associated metadata without a license, to the detriment, Getty says, of the content’s creators. The notion at the heart of Getty’s assertion, that generative AI tools like Stable Diffusion and OpenAI’s DALL-E 2 are in fact exploiting the creators of the images their models are trained on, could have significant implications for the field.

Earlier this month I attended a symposium on Existing Law and Extended Reality, hosted at Stanford Law School. There, I met today’s guest, Michael Running Wolf, who brings a unique perspective to questions of AI and ownership as a former Amazon software engineer, a PhD student in computer science at McGill University, and a Northern Cheyenne man intent on preserving the language and culture of Native peoples. Read More

#gans, #podcasts, #chatbots, #nlp

ChatGPT: Netscape Moment or Nothing Really Original

As the sudden explosion of public interest in ChatGPT continues to excite millions, we ask: Is this the tipping point for machine-driven conversation (and more)? Is ChatGPT the Netscape of our time?

In Fortune’s The inside story of ChatGPT: How OpenAI founder Sam Altman built the world’s hottest technology with billions from Microsoft, author Jeremy Kahn helpfully explains OpenAI’s history, structure, financing, and much more — at 6K words, the article covers a lot of territory. Kahn cuts straight to The Big Moment scenario in his opening paragraph [emphasis mine]:

“A few times in a generation, a product comes along that catapults a technology from the fluorescent gloom of engineering department basements, the fetid teenage bedrooms of nerds, and the lonely man caves of hobbyists — into something that your great-aunt Edna knows how to use. There were web browsers as early as 1990. But it wasn’t until Netscape Navigator came along in 1994 that most people discovered the internet. There were MP3 players before the iPod debuted in 2001, but they didn’t spark the digital music revolution. There were smartphones before Apple dropped the iPhone in 2007 too — but before the iPhone, there wasn’t an app for that.” Read More

#chatbots, #nlp

The generative AI revolution has begun—how did we get here?

A new class of incredibly powerful AI models has made recent breakthroughs possible.

Progress in AI systems often feels cyclical. Every few years, computers can suddenly do something they’ve never been able to do before. “Behold!” the AI true believers proclaim, “the age of artificial general intelligence is at hand!” “Nonsense!” the skeptics say. “Remember self-driving cars?”

The truth usually lies somewhere in between.

We’re in another cycle, this time with generative AI. Media headlines are dominated by news about AI art, but there’s also unprecedented progress in many widely disparate fields. Everything from videos to biology, programming, writing, translation, and more is seeing AI progress at the same incredible pace. Read More

#gans, #nlp

A Skeptical Take on the A.I. Revolution

The year 2022 was jam-packed with advances in artificial intelligence, from the release of image generators like DALL-E 2 and text generators like Cicero to a flurry of developments in the self-driving car industry. And then, on November 30, OpenAI released ChatGPT, arguably the smartest, funniest, most humanlike chatbot to date.

In the weeks since, ChatGPT has become an internet sensation. If you’ve spent any time on social media recently, you’ve probably seen screenshots of it describing Karl Marx’s theory of surplus value in the style of a Taylor Swift song or explaining how to remove a sandwich from a VCR in the style of the King James Bible. There are hundreds of examples like that.

But amid all the hype, I wanted to give voice to skepticism: What is ChatGPT actually doing? Is this system really as “intelligent” as it can sometimes appear? And what are the implications of unleashing this kind of technology at scale? Read More

#chatbots, #nlp, #podcasts

I outsourced my memory to AI for 3 weeks

On a recent late afternoon, I was having trouble remembering. My browsing history for the day suggested I’d read over a dozen news articles, numerous Slack messages, plenty of Twitter threads, and a bunch of notes for my next assignment. Yet, somehow, I couldn’t recall much of it. I remembered some vague contours of the content I had consumed but lacked the details.

That afternoon wasn’t particularly special — a few days later, I struggled to recollect the details of a lengthy COVID story I had read during a conversation with a friend. These instances weren’t some crises of memory, nor were they due to a head injury. I just had too much rattling around in my brain. No matter what or how much I read online, my mind can’t help but forget it shortly after. I don’t blame my brain, either. Most people consume an overwhelming volume of text every day — hundreds of thousands of words — so it’s no surprise that our memories struggle to retain more than a few scant details. “Humans have worse memories than we think we do, and memory for text, in general, isn’t great,” Virginia Clinton-Lisell, an associate professor of educational psychology at the University of North Dakota, told me.

… Heyday, which bills itself as an AI memory assistant, promises to fix the two key challenges I’ve faced with reading-list tools: it demands little to no effort from me, and it aims to help me remember things better. Instead of simply cataloging where I read something, it promises to help me recall what I’ve been reading. In the three weeks I spent with the app, I found it was effective at helping me remember things, but it comes with a catch: using a memory tool like this has the potential to make your biological memory worse over time. Read More

#nlp

What Happens When AI Has Read Everything?

The dream of an artificial mind may never become a reality if AI runs out of quality prose to ingest—and there isn’t much left.

Artificial intelligence has in recent years proved itself to be a quick study, although it is being educated in a manner that would shame the most brutal headmaster. Locked into airtight Borgesian libraries for months with no bathroom breaks or sleep, AIs are told not to emerge until they’ve finished a self-paced speed course in human culture. On the syllabus: a decent fraction of all the surviving text that we have ever produced.

When AIs surface from these epic study sessions, they possess astonishing new abilities. People with the most linguistically supple minds—hyperpolyglots—can reliably flip back and forth between a dozen languages; AIs can now translate between more than 100 in real time. They can churn out pastiche in a range of literary styles and write passable rhyming poetry. DeepMind’s Ithaca AI can glance at Greek letters etched into marble and guess the text that was chiseled off by vandals thousands of years ago. Read More

#nlp

Company creates 2 artificial intelligence interns: ‘They are hustling and grinding’

Codeword created two interns to work in editorial and engineering.

Artificial intelligence isn’t just making inroads in technology. AI may soon replace human beings in some jobs, as evidenced by one company that has created two AI interns.

Kyle Monson, co-founder of the digital marketing company Codeword, appeared on ABC News’ daily podcast “Start Here” to talk about the creation of AI interns Aiden and Aiko, who will be assisting in editorial and engineering. Their creation comes amid the sensation of the artificial intelligence-driven program ChatGPT, which has gone viral for responding to user prompts in the styles of Shakespeare and poetry in its efforts to recreate human interaction.

Monson spoke about the implications of these digital hires that mirror humans and if there is a potential to erase human intelligence. Read More

#chatbots, #nlp

Abstracts written by ChatGPT fool scientists

Researchers cannot always differentiate between AI-generated and original abstracts.

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science.

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. Read More

#chatbots, #nlp

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods

Advances in natural language generation (NLG) have resulted in machine generated text that is increasingly difficult to distinguish from human authored text. Powerful open-source models are freely available, and user-friendly tools democratizing access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse. Detection of machine generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. We provide a survey that includes both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review of machine generated text detection methods to date. This survey places machine generated text within its cybersecurity and social context, and provides strong guidance for future work addressing the most critical threat models, and ensuring detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability. Read More

#adversarial, #chatbots, #nlp

What ChatGPT Could Mean for the Future of Artificial Intelligence

Read More

#nlp, #videos