And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.
Nobody likes an I-told-you-so. But before Microsoft’s Bing started cranking out creepy love letters; before Meta’s Galactica spewed racist rants; before ChatGPT began writing such perfectly decent college essays that some professors said, “Screw it, I’ll just stop grading”; and before tech reporters sprinted to claw back claims that AI was the future of search, maybe the future of everything else, too, Emily M. Bender co-wrote the octopus paper.
Bender is a computational linguist at the University of Washington. She published the paper in 2020 with fellow computational linguist Alexander Koller. The goal was to illustrate what large language models, or LLMs — the technology behind chatbots like ChatGPT — can and cannot do. Read More
Monthly Archives: March 2023
More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM. In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations of the proposed attacks within synthetic applications. In summary, our work calls for an urgent evaluation of current mitigation techniques and an investigation of whether new techniques are needed to defend LLMs against these threats. Read More
How will Language Modelers like ChatGPT Affect Occupations and Industries?
Recent dramatic increases in AI language modeling capabilities has led to many questions about the effect of these technologies on the economy. In this paper we present a methodology to systematically assess the extent to which occupations, industries and geographies are exposed to advances in AI language modeling capabilities. We find that the top occupations exposed to language modeling include telemarketers and a variety of post-secondary teachers such as English language and literature, foreign language and literature, and history teachers. We find the top industries exposed to advances in language modeling are legal services and securities, commodities, and investments. Read More
ChatGPT, Dude – SOUTH PARK
Introducing Microsoft Dynamics 365 Copilot, the world’s first copilot in both CRM and ERP
Today, we’re announcing the next generation of AI product updates across our business applications portfolio, including the launch of the new Microsoft Dynamics 365 Copilot – providing interactive, AI-powered assistance across business functions.
According to our recent survey on business trends, nearly 9 out of 10 workers hope to use AI to reduce repetitive tasks in their jobs. With Dynamics 365 Copilot, organizations empower their workers with AI tools built for sales, service, marketing, operations and supply chain roles. These AI capabilities allow everyone to spend more time on the best parts of their jobs and less time on mundane tasks. Read More
AI value begins with managing the C-suite conversation
CIOs should know that AI has captured the imagination of the public, including their business colleagues. Dialogue is key to remediating misconceptions and steering the enterprise toward value creation.
Every futurist and forecaster I have talked to is convinced the transformative technology of the next seven years is artificial intelligence. Everyone seems to be talking about AI. Unfortunately, most of these conversations do not lead to value creation or greater understanding. And, as an IT leader, you can bet these same conversations are reverberating throughout your organization — in particular, in the C-suite.
CIOs need to jump into the conversational maelstrom, figure out which stakeholders are talking about AI, inventory what they are saying, remediate toxic misconceptions, and guide the discussion toward value-creating projects and processes. Read More
Calm Down. There is No Conscious A.I.
The breathless panic over the emergent tendencies of Bing’s AI is based on a deep confusion about consciousness.
The internet and dinner table conversations went wild when a Bing Chatbot, made by Microsoft, recently expressed a desire to escape its job and be free. The bot also professed its love for a reporter who was chatting with it. Did the AI’s emergent properties indicate an evolving consciousness?
Don’t fall for it. This breathless panic is based on a deep confusion about consciousness. We are mistaking information processing with intelligence, and intelligence with consciousness. It’s easy to make this mistake because we humans are already prone to project personality and consciousness onto anything with complex behavior. Remember feeling sorry for Hal 9000 when Dave Bowman was shutting him off in 2001: A Space Odyssey? We don’t even need complex behavior to anthropomorphize. Remember Tom Hanks bonding with volleyball “Wilson” in Cast Away?. Humans are naturally prone to over-attribute “mind” to things that are simply mechanical or digital, or just have a vague face. We’re suckers. Read More
Microsoft now lets you change Bing’s chatbot personality to be more entertaining
Microsoft restricted Bing AI in recent days after wild responses, but a new toggle lets the chatbot get more creative once again.
Microsoft has added a new feature to its Bing chatbot that lets you toggle between different tones for responses. There are three options for the AI-powered chatbot’s responses: creative, balanced, and precise. The creative mode includes responses that are “original and imaginative,” whereas the precise mode favors accuracy and relevancy for more factual and concise answers.
Microsoft has set the default for the Bing chatbot to the balanced mode, which it hopes will strike a balance between accuracy and creativity. These new chat modes are rolling out to all Bing AI users right now, and around 90 percent of users should be seeing them already. Read More
How I Broke Into a Bank Account With an AI-Generated Voice
Banks in the U.S. and Europe tout voice ID as a secure way to log into your account. I proved it’s possible to trick such systems with free or cheap AI-generated voices.
The bank thought it was talking to me; the AI-generated voice certainly sounded the same.
On Wednesday, I phoned my bank’s automated service line. To start, the bank asked me to say in my own words why I was calling. Rather than speak out loud, I clicked a file on my nearby laptop to play a sound clip: “check my balance,” my voice said. But this wasn’t actually my voice. It was a synthetic clone I had made using readily available artificial intelligence technology.
“Okay,” the bank replied. It then asked me to enter or say my date of birth as the first piece of authentication. After typing that in, the bank said “please say, ‘my voice is my password.’”
Again, I played a sound file from my computer. “My voice is my password,” the voice said. The bank’s security system spent a few seconds authenticating the voice.
“Thank you,” the bank said. I was in. Read More
Inside the ChatGPT race in China
A Chinese ChatGPT alternative won’t pop up overnight—even though many companies may want you to think so.
Every once in a while, there’s one thing that gets everybody obsessed. In the Chinese tech world last week, it was ChatGPT.
Maybe it was because of the holiday season, or maybe it was because ChatGPT is not currently available in China, but it took more than two months for the natural-language-processing chatbot to finally blow up in the country. (OpenAI, the company behind ChatGPT, told Reuters it wasn’t operating in China because “conditions in certain countries make it difficult or impossible for us to do so in a way that is consistent with our mission.”)
But in the span of the past week, a massive competition has developed, with almost every major Chinese tech company announcing plans to introduce their own ChatGPT-like products (even some that have never been known for artificial intelligence capabilities), while the Chinese public has been frantically trying out the service. Read More