Superhuman: What can AI do in 30 minutes?

The thing that we have to come to grips with in a world of ubiquitous, powerful AI tools is how much they can do for us. The multiplier on human effort is unprecedented, and potentially disruptive. But this fact can often feel abstract.

So I decided to run an experiment. I gave myself 30 minutes, and tried to accomplish as much as I could during that time on a single business project. At the end of 30 minutes I would stop. The project: to market the launch of a new educational game. AI would do all the work; I would just offer directions.

And what it accomplished was superhuman. I will go through the details in a moment, but in 30 minutes it did market research, created a positioning document, wrote an email campaign, created a website, created a logo and “hero shot” graphic, made a social media campaign for multiple platforms, and scripted and created a video. In 30 minutes. Read More

#chatbots, #augmented-intelligence

Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons

We provide a theoretical framework for Reinforcement Learning with Human Feedback (RLHF). Our analysis shows that when the true reward function is linear, the widely used maximum likelihood estimator (MLE) converges under both the Bradley-Terry-Luce (BTL) model and the Plackett-Luce (PL) model. However, we show that when training a policy based on the learned reward model, MLE fails while a pessimistic MLE provides policies with improved performance under certain coverage assumptions. Additionally, we demonstrate that under the PL model, the true MLE and an alternative MLE that splits the K-wise comparison into pairwise comparisons both converge. Moreover, the true MLE is asymptotically more efficient. Our results validate the empirical success of existing RLHF algorithms in InstructGPT and provide new insights for algorithm design. We also unify the problem of RLHF and max-entropy Inverse Reinforcement Learning (IRL), and provide the first sample complexity bound for max-entropy IRL. Read More
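
To make the BTL-with-linear-reward setting concrete, here is a minimal sketch (not from the paper; the synthetic data, dimensions, variable names, and use of SciPy are illustrative assumptions) of fitting a reward parameter theta by maximum likelihood from pairwise comparisons, where the probability that one response is preferred is the sigmoid of the reward difference:

    import numpy as np
    from scipy.optimize import minimize

    # Synthetic pairwise-comparison data: phi_a[i] and phi_b[i] are feature
    # vectors of two candidate responses; preferences are sampled from the
    # BTL model P(a preferred over b) = sigmoid(theta^T (phi_a - phi_b)).
    rng = np.random.default_rng(0)
    n, d = 200, 5
    theta_true = rng.normal(size=d)
    phi_a = rng.normal(size=(n, d))
    phi_b = rng.normal(size=(n, d))
    p_a_wins = 1.0 / (1.0 + np.exp(-(phi_a - phi_b) @ theta_true))
    a_wins = rng.random(n) < p_a_wins
    phi_win = np.where(a_wins[:, None], phi_a, phi_b)
    phi_lose = np.where(a_wins[:, None], phi_b, phi_a)

    def neg_log_likelihood(theta):
        # Negative BTL log-likelihood of the observed preferences under
        # the linear reward r(x) = theta^T phi(x).
        diff = (phi_win - phi_lose) @ theta
        return np.sum(np.logaddexp(0.0, -diff))  # sum of log(1 + exp(-diff))

    theta_mle = minimize(neg_log_likelihood, np.zeros(d)).x

Roughly speaking, the paper's pessimistic MLE then trains the policy against a conservative, worst-case reward within a confidence region around this estimate rather than against the point estimate itself, which is what yields the improved guarantees under the coverage assumptions mentioned above.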

#reinforcement-learning

Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI

Read More
#singularity, #videos

It’s Game Over on Vocal Deepfakes

You may recall back in October I linked to an AI-generated simulated interview between Joe Rogan and Steve Jobs. I wrote:

I also don’t buy their claim that these voices are completely generated. Most of Jobs’s lines have auditorium echo — they sound like clips copy-and-pasted. If they can really generate these voices, why doesn’t their virtual Rogan actually say Steve Jobs’s name? Send me a clip of virtual Steve Jobs saying “John Gruber is a bozo, and I tell people not to waste their time reading Daring Fireball.” Then I’ll believe it.

I neglected to follow up until now, but Ignaz Kowalczuk from ElevenLabs (the company behind Prime Voice AI) took me up on the challenge and sent me this clip:

That clip sounds noticeably stilted, but it does sound like Steve Jobs.

Now comes this: a Twitter thread from John Meyer, who trained a clone of Jobs’s voice and then hooked it up to ChatGPT to generate the words. The clips he posted to Twitter are freakishly uncanny. Read More

#audio, #fake

The Age of AI and Our Human Future

Read More

#videos

Facebook accounts hijacked by new malicious ChatGPT Chrome extension

A trojanized version of the legitimate ChatGPT extension for Chrome is gaining popularity on the Chrome Web Store, accumulating over 9,000 downloads while stealing Facebook accounts.

The extension is a copy of the legitimate popular add-on for Chrome named “ChatGPT for Google” that offers ChatGPT integration on search results. However, this malicious version includes additional code that attempts to steal Facebook session cookies.

The publisher of the extension uploaded it to the Chrome Web Store on February 14, 2023, but only started promoting it using Google Search advertisements on March 14, 2023. Since then, it has had an average of a thousand installations per day. Read More

#cyber

Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

Microsoft’s Bing said Google’s Bard had been shut down after it misread a story citing a tweet sourced from a joke. It’s not a good sign for the future of online misinformation.

If you don’t believe the rushed launch of AI chatbots by Big Tech has an extremely strong chance of degrading the web’s information ecosystem, consider the following:

Right now, if you ask Microsoft’s Bing chatbot if Google’s Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event. Read More

#fake