This Content is for Human Consumption Only

ChatGPT has subverted everyone’s predictions about automation. Just a few years ago, it seemed most likely that manual, boring, rote jobs would be automated first—but with GPT and the other newest gargantuan deep learning models like DALL-E, it now seems that writers, artists, and programmers are the most vulnerable to displacement. Everyone’s freaking out about it, including me, except my freak-out is more cynical: I don’t want to live in a world where AI content is ubiquitous and human content is sparse and poorly incentivized—if only because writing, art, programming, and the like are some of the most fulfilling vocations out there. If the technological trend continues, we’re facing a future in which intellectual work no longer exists. This is the worst imaginable end-stage-capitalism dystopia, one in which the only ways to make money are grueling physical jobs like nursing and commercial kitchen work (if you work in a field like that, you have my deepest respect). Read More

#vfx

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.  

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in. Read More

#chatbots, #singularity

New AI Music

Even more AI songs! Check ’em out!

#audio

Translate with a cloned voice

  1. Grab an OpenAI API key from here and add it to your .env file.
  2. Grab an ElevenLabs API key from here and add it to your .env file.
  3. Clone a voice with ElevenLabs and add the model ID to your .env file.
  4. Run npm install to grab the necessary packages.
  5. Run npm run dev to start your server on http://localhost:3000.
Voila! And away you go! Read More
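The steps above boil down to a .env file plus two npm commands. Here is a minimal sketch, assuming variable names like OPENAI_API_KEY, ELEVENLABS_API_KEY, and ELEVENLABS_VOICE_ID; the project’s own .env.example (if it ships one) is the authority on what it actually reads:

```shell
# Create a .env file with the three credentials the steps call for.
# NOTE: these variable names are assumptions, not confirmed by the repo;
# check its .env.example for the names the code actually expects.
cat > .env <<'EOF'
OPENAI_API_KEY=sk-...
ELEVENLABS_API_KEY=...
ELEVENLABS_VOICE_ID=...
EOF

# Then install dependencies and start the dev server:
# npm install
# npm run dev   # serves on http://localhost:3000
```

Keeping the keys in .env (rather than hard-coding them) also keeps them out of version control, provided .env is listed in .gitignore.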

#chatbots

Undercover in the metaverse

Human moderators in the metaverse are proving essential to digital safety

I recently published a story about a new kind of job that’s becoming essential at the frontier of the internet: the role of metaverse content cop. Content moderators in the metaverse go undercover into 3D worlds through a VR headset and interact with users to catch bad behavior in real time. It all sounds like a movie, and in some ways it literally is. But despite looking like a cartoon world, the metaverse is populated by very real people who can do bad things that have to be caught in the moment. 

I chatted with Ravi Yekkanti, who works for a third-party content moderation company called WebPurify that provides services to metaverse companies. Ravi moderates these environments and trains others to do the same. He told me he runs into bad behavior every day, but he loves his job and takes pride in how important it is. We get into how his job works in my story this week, but there was so much more fascinating detail to our conversation than I could get into in that format, and I wanted to share the rest of it with you here. Read More

#metaverse

You Are Grimes Now: Inside Music’s Weird AI Future

Grimes is allowing anyone and everyone to use AI models of her voice — and she’ll split royalties with you, 50/50. Her manager, Daouda Leonard, tells us why they think they’ve found the future of music

WHEN THE ANONYMOUS songwriter/producer Ghostwriter recently dropped “Heart on My Sleeve,” a song built around the AI-cloned voices of Drake and The Weeknd, Universal Music Group moved instantly to remove it from streaming services. But one artist has reacted very differently to the emerging technology. Grimes, whose last album was 2020’s Miss Anthropocene, announced via Twitter on April 23 that anyone can use AI models of her voice “without penalty,” and that she’d split royalties 50/50 with the creator of any successful song doing so. It wasn’t an idle offer; this weekend, she put up an online platform at elf.tech that allows users to post Grimes-infused songs on Spotify and other streaming services under the name GrimesAI-1. Read More

#audio, #chatbots

A Brain Scanner Combined with an AI Language Model Can Provide a Glimpse into Your Thoughts

New technology gleans the gist of stories a person hears while lying in a brain scanner

Functional magnetic resonance imaging (fMRI) captures coarse, colorful snapshots of the brain in action. While this specialized type of magnetic resonance imaging has transformed cognitive neuroscience, it isn’t a mind-reading machine: neuroscientists can’t look at a brain scan and tell what someone was seeing, hearing or thinking in the scanner.

But gradually scientists are pushing against that fundamental barrier to translate internal experiences into words using brain imaging. This technology could help people who can’t speak or otherwise outwardly communicate, such as those who have suffered strokes or are living with amyotrophic lateral sclerosis. Current brain-computer interfaces require the implantation of devices in the brain, but neuroscientists hope to use non-invasive techniques such as fMRI to decipher internal speech without the need for surgery.

Now researchers have taken a step forward by combining fMRI’s ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy. Read More

#chatbots, #human

‘Godfather of A.I.’ leaves Google after a decade to warn society of technology he’s touted

Geoffrey Hinton, known as “The Godfather of AI,” received his Ph.D. in artificial intelligence 45 years ago and has remained one of the most respected voices in the field.

For the past decade, Hinton worked part-time at Google, splitting his time between the company’s Silicon Valley headquarters and Toronto. But he has now quit the internet giant, and he told The New York Times that he’ll be warning the world about the potential threat of AI, which he said is coming sooner than he previously thought. Read More

#singularity

Five Recommendations for Improving the “Measures for the Management of Generative AI Services (Draft for Comment)”

On April 10, the Cyberspace Administration of China released the “Measures for the Management of Generative AI Services (Draft for Comment)”. As the world’s first draft legislation targeting generative AI, the consultation draft responds to the technology’s new risks and challenges and comprehensively regulates the research, development, and use of generative AI to ensure the industry’s healthy and orderly growth. Its timely publication reflects China’s people-first governance philosophy, its equal emphasis on security and development, and the close attention the cybersecurity and informatization authorities pay to content-security governance and the allocation of responsibility. At a moment when generative AI services of every kind are drawing wide attention and racing to develop, the draft will play an important role in building consensus and setting regulatory guidelines. Read More

#china-ai

The first babies conceived with a sperm-injecting robot have been born

Last spring, engineers in Barcelona packed up the sperm-injecting robot they’d designed and sent it by DHL to New York City. They followed it to a clinic there, called New Hope Fertility Center, where they put the instrument back together, assembling a microscope, a mechanized needle, a tiny petri dish, and a laptop.

Then one of the engineers, with no real experience in fertility medicine, used a Sony PlayStation 5 controller to position a robotic needle. Guided by a camera view of a human egg, the needle then moved forward on its own, penetrating the egg and depositing a single sperm cell. Altogether, the robot was used to fertilize more than a dozen eggs.

The results of the procedures, say the researchers, were healthy embryos—and now two baby girls, who, they claim, are the first people born after fertilization by a “robot.” Read More

#robotics