Six ways that AI could change politics

ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance.

But when it comes to how AI may threaten our democracy, much of the public conversation lacks imagination. People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images. We’re on the lookout for foreign governments that spread misinformation because we were traumatized by the 2016 US presidential election. And we worry that AI-generated opinions will swamp the political preferences of real people because we’ve seen political “astroturfing”—the use of fake online accounts to give the illusion of support for a policy—grow for decades. — Read More

#ethics

AI resurrection of Brazilian singer for car ad sparks joy and ethical worries

Beloved musician Elis Regina died aged 36 in 1982 but a new Volkswagen commercial shows her duetting with her daughter

The premature death in 1982 of one of Brazil’s most treasured musicians left her homeland reeling. “Brazil without Elis,” mourned one front page after the legendary singer Elis Regina unexpectedly died at the age of 36.

So when Elis Regina recently re-emerged, performing a soul-stirring duet with her daughter, the Grammy-winning singer Maria Rita, there were similarly charged scenes of catharsis and nostalgia. — Read More

#ethics, #fake

How a tiny company with few rules is making fake images go mainstream

Midjourney, the year-old firm behind recent fake visuals of Trump and the pope, illustrates the lack of oversight accompanying spectacular strides in AI

The AI image generator Midjourney, whose stated aim is “expanding the imaginative powers of the human species,” has quickly become one of the internet’s most eye-catching tools, creating realistic-looking fake visuals of former president Donald Trump being arrested and Pope Francis wearing a stylish coat.

But the year-old company, run out of San Francisco with only a small collection of advisers and engineers, also has unchecked authority to determine how those powers are used.  Read More

#ethics

A Face Recognition Site Crawled the Web for Dead People’s Photos

PimEyes appears to have scraped a major ancestry website for pics, without permission. Experts fear the images could be used to identify living relatives.

Finding out Taylor Swift was her 11th cousin twice-removed wasn’t even the most shocking discovery Cher Scarlett made while exploring her family history. “There’s a lot of stuff in my family that’s weird and strange that we wouldn’t know without Ancestry,” says Scarlett, a software engineer and writer based in Kirkland, Washington. “I didn’t even know who my mum’s paternal grandparents were.”

Ancestry.com isn’t the only site that Scarlett checks regularly. In February 2022, the facial recognition search engine PimEyes surfaced non-consensual explicit photos of her at age 19, reigniting decades-old trauma. She attempted to get the pictures removed from the platform, which uses images scraped from the internet to create biometric “faceprints” of individuals. Since then, she’s been monitoring the site to make sure the images don’t return.

In January, she noticed that PimEyes was returning pictures of children that looked like they came from Ancestry.com URLs. As an experiment, she searched for a grayscale version of one of her own baby photos. It came up with a picture of her own mother, as an infant, in the arms of her grandparents—taken, she thought, from an old family photo that her mother had posted on Ancestry. Searching deeper, Scarlett found other images of her relatives, also apparently sourced from the site. They included a black-and-white photo of her great-great-great-grandmother from the 1800s, and a picture of Scarlett’s own sister, who died at age 30 in 2018. The images seemed to come from her digital memorial, Ancestry, and Find a Grave, a cemetery directory owned by Ancestry.

PimEyes, Scarlett says, has scraped images of the dead to populate its database. By indexing their facial features, the site’s algorithms can use those images to identify living people through their ancestral connections, raising privacy and data protection concerns, as well as ethical ones.

Read More

#image-recognition, #ethics

How to create, release, and share generative AI responsibly

A group of 10 companies, including OpenAI, TikTok, Adobe, the BBC, and the dating app Bumble, have signed up to a new set of guidelines on how to build, create, and share AI-generated content responsibly. 

The recommendations call for both the builders of the technology, such as OpenAI, and creators and distributors of digitally created synthetic media, such as the BBC and TikTok, to be more transparent about what the technology can and cannot do, and to disclose when people might be interacting with this type of content. 

The voluntary recommendations were put together by the Partnership on AI (PAI), an AI research nonprofit, in consultation with over 50 organizations. PAI’s partners include big tech companies as well as academic, civil society, and media organizations. The first 10 companies to commit to the guidance are Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, Witness, and synthetic-media startups Synthesia, D-ID, and Respeecher. 

“We want to ensure that synthetic media is not used to harm, disempower, or disenfranchise but rather to support creativity, knowledge sharing, and commentary,” says Claire Leibowicz, PAI’s head of AI and media integrity.  Read More

#ethics

AI-generated Seinfeld parody banned on Twitch over transphobic standup bit

Nothing, Forever, a 24/7 show based on the popular sitcom, will be offline for 14 days as makers blame technical glitch

An AI-generated Seinfeld show has been banned from the streaming platform Twitch for at least 14 days after a transphobic and homophobic standup bit aired during the show.

… Mimicking Seinfeld, the AI stream opens with its character Larry performing a standup routine.

But during a stream on Sunday night, Larry made a series of homophobic and transphobic remarks during a standup bit, according to a clip on LiveStreamFails.com. Read More

#ethics

In AI arms race, ethics may be the first casualty

As the tech world embraces ChatGPT and other generative AI programs, the industry’s longstanding pledges to deploy AI responsibly could quickly be swamped by beat-the-competition pressures.

Why it matters: Once again, tech’s leaders are playing a game of “build fast and ask questions later” with a new technology that’s likely to spark profound changes in society.

  • Social media started two decades ago with a similar rush to market. First came the excitement — later, the damage and regrets.
Read More

#ethics

Teaching In The Age Of AI Means Getting Creative

Alarm bells seemed to sound in teachers’ lounges across America late last year with the debut of ChatGPT — an AI chatbot that was both easy to use and capable of producing dialogue-like responses, including longer-form writing and essays. Some writers and educators went so far as to forecast the death of student papers. However, not everyone was convinced it was time to panic. Plenty of naysayers pointed to the bot’s unreliable results, factual inaccuracies, and dull tone, and insisted that the technology wouldn’t replace real writing.

Indeed, ChatGPT and similar AI systems are being used in realms beyond education, but classrooms seem to be where fears about the bot’s misuse — and ideas to adapt alongside evolving technology — are playing out first. The realities of ChatGPT are forcing professors to take a long look at today’s teaching methods and what they actually offer to students. Current types of assessment, including the basic essays ChatGPT can mimic, may become obsolete. But instead of branding the AI as a gimmick or threat, some educators say this chatbot could end up recalibrating the way they teach, what they teach and why they teach it.  Read More

#chatbots, #ethics

Elon Musk Has Fired Twitter’s ‘Ethical AI’ Team

Not long after Elon Musk announced plans to acquire Twitter last March, he mused about open sourcing “the algorithm” that determines how tweets are surfaced in user feeds so that it could be inspected for bias.

His fans—as well as those who believe the social media platform harbors a left-wing bias—were delighted.

But today, as part of an aggressive plan to trim costs that involves firing thousands of Twitter employees, Musk’s management team cut a team of artificial intelligence researchers who were working toward making Twitter’s algorithms more transparent and fair. Read More

#ethics

Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes

In this article, we compare the artificial intelligence strategies of China and the European Union, assessing the key similarities and differences regarding what the high-level aims of each governance strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We characterize China’s strategy by its primary focus on fostering innovation and a more recent emphasis on “common prosperity,” and the EU’s on promoting ethical outcomes through protecting fundamental rights. Building on this comparative analysis, we consider the areas where the EU and China could learn from and improve upon each other’s approaches to AI governance to promote more ethical outcomes. We outline policy recommendations for both European and Chinese policymakers that would support them in achieving this aim. Read More

#china-ai, #ethics