As it’s currently imagined, the technology promises to concentrate wealth and disempower workers. Is an alternative possible?
When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.
So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America. — Read More
Google’s open-source AI tool let me play my favorite Dreamcast game with my face
Project Gameface is ready to install as a Windows app that makes gaming more accessible using only your webcam.
While Wednesday’s Google I/O event largely hyped the company’s biggest AI initiatives, the company also announced updates to the machine-learning suite that powers Google Lens and Google Meet features like object tracking and recognition, gesture control, and, of course, facial detection. The newest update enables app developers to, among other things, build Snapchat-like face filters and hand tracking; the company showed off a GIF that’s definitely not a Memoji.
This update underpins a special project announced during the I/O developer keynote: an open-source accessibility application called Project Gameface, which lets you play games… with your face. During the keynote, Google played a very Wes Anderson-esque mini-documentary revealing a tragedy that prompted the company to design Gameface. — Read More
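Under the hood, both the face filters and a Gameface-style controller rest on per-frame facial-landmark detection from a webcam. The suite described above is presumably Google’s MediaPipe; the sketch below assumes its MediaPipe Tasks Python API and a locally downloaded `face_landmarker.task` model file, and exact names may differ across versions.

```python
# Webcam face-landmark loop, sketching how head movement could drive a
# cursor. Assumes MediaPipe Tasks ("pip install mediapipe opencv-python")
# and a downloaded face_landmarker.task model; not Gameface's actual code.
import cv2
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

cap = cv2.VideoCapture(0)  # default webcam
for _ in range(300):  # a few seconds of frames
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = landmarker.detect(mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb))
    if result.face_landmarks:
        # Landmark 1 is the nose tip; mapping its normalized x/y to the
        # mouse position is roughly what a Gameface-style app does.
        nose = result.face_landmarks[0][1]
        print(f"nose at x={nose.x:.2f}, y={nose.y:.2f}")
cap.release()
```

Facial gestures (an open mouth, a raised eyebrow) can be recognized from the same landmark stream and bound to clicks or keystrokes, which is what makes webcam-only game control practical.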
AI Claude: Introducing 100K Context Windows
We’ve expanded Claude’s context window from 9K to 100K tokens, corresponding to around 75,000 words! This means businesses can now submit hundreds of pages of materials for Claude to digest and analyze, and conversations with Claude can go on for hours or even days.
The average person can read 100,000 tokens of text in roughly five hours, and might then need substantially longer to digest, remember, and analyze that information. Claude can now do this in less than a minute. For example, we loaded the entire text of The Great Gatsby into Claude-Instant (72K tokens) and modified one line to say Mr. Carraway was “a software engineer that works on machine learning tooling at Anthropic.” When we asked the model to spot what was different, it responded with the correct answer in 22 seconds. — Read More
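For a feel of what those numbers mean in practice, here is a minimal Python sketch of the Gatsby-style experiment above. It assumes Anthropic’s official `anthropic` SDK; the model name, the file name, and the roughly 0.75-words-per-token ratio (implied by 100K tokens ≈ 75,000 words) are placeholders, and the exact client interface varies by SDK version.

```python
import anthropic

# Rough ratio implied above: 100K tokens ~ 75,000 words (assumption;
# the true ratio varies with the text and the tokenizer).
WORDS_PER_TOKEN = 0.75

def estimate_tokens(text: str) -> int:
    """Crude token estimate from a simple word count."""
    return int(len(text.split()) / WORDS_PER_TOKEN)

# A book-length document, e.g. the full text of The Great Gatsby
# saved locally as gatsby.txt (hypothetical file name).
with open("gatsby.txt") as f:
    book = f.read()
print(f"~{estimate_tokens(book):,} estimated tokens")

# Ask Claude to spot the modified line, mirroring the experiment above.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-instant-1.2",  # placeholder for a 100K-context model
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": book + "\n\nOne line in this text was modified. Which one?",
    }],
)
print(response.content[0].text)
```

Nothing here is specific to long documents except the context budget: the same call fails with a context-length error once the prompt’s token count exceeds the model’s window.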
Enter PaLM 2 (New Bard): Full Breakdown – 92 Pages Read and Gemini Before GPT 5? Google I/O
Can AI actually write good fanfiction?
Since artificial intelligence-powered text-generation tools were made widely available to the public in the past few months, they’ve been heralded by some as the future of email, internet search, and content generation. But these AI-powered tools also have some clear shortcomings: they are frequently incorrect, and they often generate answers that reinforce racial biases, for example. There are also serious ethical concerns about their undisclosed training data.
It is not surprising that debates over using these tools have also been happening in fandom spaces. Excited fans almost immediately turned to them as a new way of exploring their favorite characters. With the right prompt, AI can spit out a few paragraphs of fic-like writing. But just as quickly, many fanfic writers began to speak out against the practice. — Read More
Building Trustworthy AI
We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.
Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?
For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit. — Read More
Google makes its text-to-music AI public
Google today released MusicLM, a new experimental AI tool that can turn text descriptions into music. Available in the AI Test Kitchen app on the web, Android, or iOS, MusicLM lets users type in a prompt like “soulful jazz for a dinner party” or “create an industrial techno sound that is hypnotic” and have the tool create several versions of the song. — Read More
“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company
Bad Actors Are Joining the AI Revolution: Here’s What We’ve Found in the Wild
Movies and TV shows have taught us to associate computer hackers with difficult tasks, detailed plots, and elaborate schemes.
What security researcher Carlos Fernández and I have recently found on open-source registries tells a different story: bad actors are favoring simplicity, effectiveness, and user-centered thinking. And to take their malicious code to the next level, they’re also adding new features assisted by ChatGPT.
Just like software-as-a-service (SaaS), part of the reason malware-as-a-service (MaaS) offerings such as DuckLogs, RedLine Stealer, and Raccoon Stealer have become so popular in underground markets is that they have active customer-support channels and their products tend to be slick and user-friendly. Check these boxes, fill out this form, click this button… Here’s your ready-to-use malware sample! Needless to say, these products are often built by professional cybercriminals. — Read More
India’s religious AI chatbots are speaking in the voice of god — and condoning violence
Claiming wisdom based on the Bhagavad Gita, the bots frequently go way off script.
In January 2023, when ChatGPT was setting new growth records, Bengaluru-based software engineer Sukuru Sai Vineet launched GitaGPT. The chatbot, powered by GPT-3 technology, provides answers based on the Bhagavad Gita, a 700-verse Hindu scripture. GitaGPT mimics the Hindu god Krishna’s tone — the search box reads, “What troubles you, my child?”
… At least five GitaGPTs have sprung up between January and March this year, with more on the way. Experts have warned that chatbots being allowed to play god might have unintended, and dangerous, consequences. Rest of World found that some of the answers generated by the Gita bots lack filters for casteism, misogyny, and even law. Three of these bots, for instance, say it is acceptable to kill another if it is one’s dharma or duty. — Read More
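Mechanically, bots like these are thin wrappers around a general-purpose language model: a persona-setting prompt plus the user’s question, with no scripture-specific guardrails unless the builder adds them. A minimal sketch of that pattern, assuming OpenAI’s 2023-era Python SDK; the model name and system prompt are illustrative stand-ins, not GitaGPT’s actual code:

```python
# Persona-chatbot pattern described above; NOT GitaGPT's actual code or
# prompt. Assumes the 2023-era OpenAI SDK ("pip install openai==0.27.*")
# and OPENAI_API_KEY set in the environment.
import openai

PERSONA = (
    "You answer questions in the voice of the Hindu god Krishna, "
    "drawing on the Bhagavad Gita. Address the user as 'my child'."
)

def ask_gita_bot(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder; GitaGPT reportedly used GPT-3
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(ask_gita_bot("What is my dharma?"))
```

Nothing in this pattern keeps answers within the scripture or screens them for the harms described above; any such filtering has to be layered on deliberately, which is precisely what Rest of World found missing.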