It makes sense that LinkedIn would be the first major social network to push AI-generated content on its users. The Microsoft-owned company is weird. It’s corporate. It’s full of workfluencer posts and engagement bait that ranges in tone from management consultant bland to cheerfully psychotic. Happily, this is the same emotional spectrum on which AI tends to operate.
LinkedIn isn’t populating its feed with AI chatbots just yet, but last week it began sharing “AI-powered conversation starters” with the express purpose of provoking discussion among users. These posts are “developed” with the help of LinkedIn’s editorial team and matched with human experts who can then offer their thoughts on topics like “how to create a consistent brand voice on social media” and “how to monitor the online reach of your writing.” So far, so anodyne — like the contents of an r/askmckinsey subreddit.
But this project is a milestone nevertheless, and it may herald the start of a wider revolution for the web. It’s the first time I know of that a major social media platform has directly served users AI-generated content to keep them engaged. And in a time of social media stagnation, from Twitter’s manifold struggles to Meta’s desperate-looking pitch for paid subscriptions, it could point to the industry’s future: the semiautomated social network. Read More
A new era for AI and Google Workspace
For nearly 25 years, Google has built helpful products that people use every day — from Search and Maps to Gmail and Docs in Google Workspace. AI has been transformational in building products that have earned a valued place in people’s lives. Across our productivity suite, advances in AI are already helping 3 billion users save more time with Smart Compose and Smart Reply, generate summaries for Docs, look more professional in meetings, and stay safe against malware and phishing attacks.
We’re now making it possible for Workspace users to harness the power of generative AI to create, connect, and collaborate like never before. To start, we’re introducing a first set of AI-powered writing features in Docs and Gmail to trusted testers. Read More
China’s answer to ChatGPT? Baidu shares tumble as Ernie Bot disappoints
China’s Baidu unveiled its much-awaited artificial intelligence-powered chatbot known as Ernie Bot on Thursday, but disappointed investors with its use of pre-recorded videos and the lack of a public launch, sending its shares tumbling.
The presentation, which ran just over an hour and came two days after Alphabet Inc’s (GOOGL.O) Google unveiled a flurry of AI tools for its email, collaboration and cloud software, gave the world a glimpse of what could be China’s strongest rival to U.S. research lab OpenAI’s ChatGPT. Read More
OpenAI co-founder on company’s past approach to openly sharing research: ‘We were wrong’
OpenAI announced its latest language model, GPT-4, but many in the AI community were disappointed by the lack of public information. Their complaints track increasing tensions in the AI world over safety.
Yesterday, OpenAI announced GPT-4, its long-awaited next-generation AI language model. The system’s capabilities are still being assessed, but as researchers and experts pore over its accompanying materials, many have expressed disappointment at one particular feature: that despite the name of its parent company, GPT-4 is not an open AI model.
OpenAI has shared plenty of benchmark and test results for GPT-4, as well as some intriguing demos, but has offered essentially no information on the data used to train the system, its energy costs, or the specific hardware or methods used to create it. Read More
OpenAI Introduces GPT-4
OpenAI announced GPT-4, its latest milestone in scaling up deep learning.
GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%. We’ve spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.
… We are releasing GPT-4’s text input capability via ChatGPT and the API (with a waitlist). To prepare the image input capability for wider availability, we’re collaborating closely with a single partner to start. We’re also open-sourcing OpenAI Evals, our framework for automated evaluation of AI model performance, to allow anyone to report shortcomings in our models to help guide further improvements. Read More
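For readers curious what that API access looked like in practice, here is a minimal sketch of a text-only GPT-4 request using the openai Python SDK as it shipped at launch (the v0.27-era ChatCompletion interface). The prompt, temperature, and placeholder API key are illustrative, not details from OpenAI’s announcement:

```python
# Minimal sketch: a text-only GPT-4 request via the openai Python SDK
# (v0.27-era ChatCompletion interface). Prompt and temperature are
# illustrative; a real key and waitlist-granted access are required.
import openai

openai.api_key = "sk-..."  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the GPT-4 launch in one sentence."},
    ],
    temperature=0.2,  # lower values favor more deterministic output
)

print(response["choices"][0]["message"]["content"])
```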
The Rise of A.I. Companions [Documentary]
This Changes Everything
In 2018, Sundar Pichai, the chief executive of Google — and not one of the tech executives known for overstatement — said, “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”
Try to live, for a few minutes, in the possibility that he’s right. There is no more profound human bias than the expectation that tomorrow will be like today. It is a powerful heuristic tool because it is almost always correct. Tomorrow probably will be like today. Next year probably will be like this year. But cast your gaze 10 or 20 years out. Typically, that has been possible in human history. I don’t think it is now. Read More
A Face Recognition Site Crawled the Web for Dead People’s Photos
PimEyes appears to have scraped a major ancestry website for pics, without permission. Experts fear the images could be used to identify living relatives.
Finding out Taylor Swift was her 11th cousin twice removed wasn’t even the most shocking discovery Cher Scarlett made while exploring her family history. “There’s a lot of stuff in my family that’s weird and strange that we wouldn’t know without Ancestry,” says Scarlett, a software engineer and writer based in Kirkland, Washington. “I didn’t even know who my mum’s paternal grandparents were.”
Ancestry.com isn’t the only site that Scarlett checks regularly. In February 2022, the facial recognition search engine PimEyes surfaced non-consensual explicit photos of her at age 19, reigniting decades-old trauma. She attempted to get the pictures removed from the platform, which uses images scraped from the internet to create biometric “faceprints” of individuals. Since then, she’s been monitoring the site to make sure the images don’t return.
In January, she noticed that PimEyes was returning pictures of children that looked like they came from Ancestry.com URLs. As an experiment, she searched for a grayscale version of one of her own baby photos. It came up with a picture of her own mother, as an infant, in the arms of her grandparents—taken, she thought, from an old family photo that her mother had posted on Ancestry. Searching deeper, Scarlett found other images of her relatives, also apparently sourced from the site. They included a black-and-white photo of her great-great-great-grandmother from the 1800s, and a picture of Scarlett’s own sister, who died at age 30 in 2018. The images seemed to come from her sister’s digital memorial, from Ancestry, and from Find a Grave, a cemetery directory owned by Ancestry.
PimEyes, Scarlett says, has scraped images of the dead to populate its database. By indexing their facial features, the site’s algorithms can use those images to identify living people through their ancestral connections, raising privacy and data protection concerns, as well as ethical ones.
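For context, a “faceprint” is essentially an embedding: a numeric vector derived from facial features, compared by distance. The sketch below illustrates the general technique using the open-source face_recognition library; PimEyes’ actual system is proprietary, and the file names and the 0.6 threshold here are illustrative (the threshold is that library’s conventional default, not a detail from the article):

```python
# Conceptual sketch of embedding-based face matching, the general
# technique behind "faceprints". Uses the open-source face_recognition
# library; file names are illustrative placeholders.
import face_recognition

# Index side: one 128-dimensional encoding per face found in a scraped photo.
scraped = face_recognition.load_image_file("scraped_memorial_photo.jpg")
index_encodings = face_recognition.face_encodings(scraped)

# Query side: encode the face in the photo being searched for.
query = face_recognition.load_image_file("query_photo.jpg")
query_encoding = face_recognition.face_encodings(query)[0]  # assumes one face found

# Compare by Euclidean distance; the library conventionally treats
# distances under ~0.6 as the same person. Strong family resemblance
# can pull a relative's photo under that threshold as well.
distances = face_recognition.face_distance(index_encodings, query_encoding)
for i, dist in enumerate(distances):
    label = "possible match" if dist < 0.6 else "no match"
    print(f"indexed face {i}: distance {dist:.3f} ({label})")
```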
Why Are We Letting the AI Crisis Just Happen?
Bad actors could seize on large language models to engineer falsehoods at unprecedented scale.
New AI systems such as ChatGPT, the overhauled Microsoft Bing search engine, and the reportedly soon-to-arrive GPT-4 have utterly captured the public imagination. ChatGPT is the fastest-growing online application, ever, and it’s no wonder why. Type in some text, and instead of getting back web links, you get well-formed, conversational responses on whatever topic you selected—an undeniably seductive vision.
But the public, and the tech giants, aren’t the only ones who have become enthralled with the Big Data–driven technology known as the large language model. Bad actors have taken note of the technology as well. At the extreme end, there’s Andrew Torba, the CEO of the far-right social network Gab, who said recently that his company is actively developing AI tools to “uphold a Christian worldview” and fight “the censorship tools of the Regime.” But even users who aren’t motivated by ideology will have their impact. Clarkesworld, a publisher of sci-fi short stories, temporarily stopped taking submissions last month, because it was being spammed by AI-generated stories—the result of influencers promoting ways to use the technology to “get rich quick,” the magazine’s editor told The Guardian.
This is a moment of immense peril … Read More
Online storm erupts over AI work in Dutch museum’s ‘Girl with a Pearl Earring’ display
Mauritshuis currently has 170 works on display as part of its “My Girl with a Pearl” initiative while Vermeer’s masterpiece is on loan
The Mauritshuis museum in The Hague, Netherlands, is facing criticism for showing an image made using artificial intelligence (AI) which is inspired by Vermeer’s famous Girl with a Pearl Earring.
The work by Berlin-based Julian van Dieken, who describes himself as a “digital creator”, is one of five winning images chosen from around 3,480 submissions to the My Girl with a Pearl initiative, which invited devotees of the famous painting to send in their own versions of the girl.
The winning entries are on show at the Mauritshuis while Vermeer’s 1665 original masterpiece is on loan to the Rijksmuseum in Amsterdam (until 4 June); 170 entries are shown on a loop in a digital frame. Read More