AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability

Yesterday, Meta’s AI Research Team announced Make-A-Video, a “state-of-the-art AI system that generates videos from text.”

As he did for the Stable Diffusion data, Simon Willison created a Datasette browser to explore WebVid-10M, one of the two datasets used to train the video-generation model, and quickly learned that all 10.7 million video clips were scraped from Shutterstock, watermarks and all.
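To see how little digging that kind of audit takes, here's a minimal sketch assuming a local copy of the WebVid-10M metadata CSV. The filename and `contentUrl` column follow the public metadata release, but treat both as assumptions and check them against your copy:

```python
# Minimal audit sketch: count how many WebVid-10M clip URLs point at
# Shutterstock. Filename and column name follow the public metadata
# release but are assumptions here; verify against the actual files.
import pandas as pd

df = pd.read_csv("results_10M_train.csv")
is_shutterstock = df["contentUrl"].str.contains("shutterstock.com", na=False)
print(f"{is_shutterstock.sum():,} of {len(df):,} clips are hosted on shutterstock.com")
```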

In addition to the Shutterstock clips, Meta also used 10 million video clips from HD-VILA-100M, a 100M-clip video dataset from Microsoft Research Asia. It's not mentioned on their GitHub, but if you dig into the paper, you learn that every clip came from over 3 million YouTube videos.

So, in addition to a massive chunk of Shutterstock’s video collection, Meta is also using millions of YouTube videos collected by Microsoft to make its text-to-video AI. Read More

#ethics, #image-recognition, #nlp

An Open Letter to the Robotics Industry and our Communities,

General Purpose Robots Should Not Be Weaponized

We are some of the world’s leading companies dedicated to introducing new generations of advanced mobile robotics to society. These new generations of robots are more accessible, easier to operate, more autonomous, affordable, and adaptable than previous generations, and capable of navigating into locations previously inaccessible to automated or remotely-controlled technologies. We believe that advanced mobile robots will provide great benefit to society as co-workers in industry and companions in our homes.

…We pledge that we will not weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so. When possible, we will carefully review our customers’ intended applications to avoid potential weaponization. We also pledge to explore the development of technological features that could mitigate or reduce these risks. To be clear, we are not taking issue with existing technologies that nations and their government agencies use to defend themselves and uphold their laws. Read More

#ethics, #robotics

Blueprint for an AI Bill of Rights

MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE

Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.

These outcomes are deeply harmful—but they are not inevitable. Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone.

This important progress must not come at the price of civil rights or democratic values. Read More

#ethics

Student Sparks Debate by Revealing That They Use AI to Write Essays

Artificial intelligence has improved at an alarming rate over the past few years. Anybody with the most basic technical knowledge is only a few clicks away from all kinds of weird and wonderful tidbits brought to us by the wonders of robotic thinking. While there's no doubt that these advancements bring benefits, we are also reaching a point where they are upending some of the things we take for granted.

This trend surfaced in a recent Reddit post in which a high school student claims they have started writing their homework assignments with AI, and that they were even making money doing the same for their classmates. Not everybody was as enthusiastic about this new approach to schoolwork as the student was, not least because of what it implies about what students are actually learning. Who needs an education anyway? Read More

#nlp, #ethics

Deepfakes for all: Uncensored AI art model prompts ethics questions

A new open source AI image generator capable of producing realistic pictures from any text prompt has seen stunningly swift uptake in its first week. Stability AI's Stable Diffusion, high fidelity but capable of being run on off-the-shelf consumer hardware, is now in use by art generator services like Artbreeder, Pixelz.ai and more. But the model's unfiltered nature means not all of its use has been completely above board.
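For a sense of just how low that hardware bar is, here is a minimal sketch of running the model locally with Hugging Face's diffusers library. The model ID and prompt are illustrative, and the half-precision setting is what keeps memory use modest enough for a consumer GPU:

```python
# Sketch: text-to-image with Stable Diffusion via Hugging Face diffusers.
# Model ID and prompt are illustrative; fp16 keeps VRAM use low enough
# for off-the-shelf consumer GPUs.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```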

For the most part, the use cases have been above board. For example, NovelAI has been experimenting with Stable Diffusion to produce art that can accompany the AI-generated stories created by users on its platform. Midjourney has launched a beta that taps Stable Diffusion for greater photorealism.

But Stable Diffusion has also been used for less savory purposes. On the infamous discussion board 4chan, where the model leaked early, several threads are dedicated to AI-generated art of nude celebrities and other forms of generated pornography. Read More

#ethics, #fake

Artificial intelligence predicts patients’ race from their medical images

Study shows AI can identify self-reported race from medical images that contain no indications of race detectable by human experts.

The miseducation of algorithms is a critical problem; when artificial intelligence mirrors unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to reoffend as someone who’s white. When an AI used cost as a proxy for health needs, it falsely named Black patients as healthier than equally sick white ones, as less money was spent on them. Even AI used to write a play relied on using harmful stereotypes for casting. 

Removing sensitive features from the data seems like a viable tweak. But what happens when it’s not enough? 

Examples of bias in natural language processing are boundless — but MIT scientists have investigated another important, largely underexplored modality: medical images. Using both private and public datasets, the team found that AI can accurately predict self-reported race of patients from medical images alone. Using imaging data of chest X-rays, limb X-rays, chest CT scans, and mammograms, the team trained a deep learning model to identify race as white, Black, or Asian — even though the images themselves contained no explicit mention of the patient’s race. This is a feat even the most seasoned physicians cannot do, and it’s not clear how the model was able to do this.  Read More
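For readers wondering what "trained a deep learning model to identify race" means mechanically, here is a minimal sketch of that style of experiment: fine-tuning a stock image backbone on labeled X-rays. This is not the MIT team's code; the directory layout, backbone choice, and hyperparameters are all placeholders:

```python
# Sketch of the experiment style described above: fine-tune a standard
# CNN to predict self-reported race labels from chest X-rays. Not the
# study's actual code; paths, backbone, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects xrays/<label>/image.png, with labels like white/black/asian
dataset = datasets.ImageFolder("xrays/", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet34(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:  # one epoch, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The unsettling part of the finding is not the pipeline, which is entirely standard, but that such a generic setup reaches high accuracy on a signal that, per the study, even seasoned physicians cannot see.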

#bias, #ethics

If AI Is Predicting Your Future, Are You Still Free?

AS YOU READ these words, there are likely dozens of algorithms making predictions about you. It was probably an algorithm that determined that you would be exposed to this article because it predicted you would read it. Algorithmic predictions can determine whether you get a loan or a job or an apartment or insurance, and much more.

These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through academic literature for the ethics of prediction shows it is an underexplored field of knowledge. As a society, we haven’t thought through the ethical implications of making predictions about people—beings who are supposed to be infused with agency and free will. Read More

#ethics

Responsible AI Guidelines

As part of its mission to accelerate adoption of commercial technology within the Department of Defense (DoD), the Defense Innovation Unit (DIU) launched a strategic initiative in March 2020 to integrate the DoD’s Ethical Principles for Artificial Intelligence (AI) into its commercial prototyping and acquisition programs. Drawing upon best practices from government, non-profit, academic, and industry partners, DIU explored methods for implementing these principles in several of its AI prototype projects. The result is a set of Responsible Artificial Intelligence (RAI) Guidelines. Read More

#dod, #ethics

We invited an AI to debate its own ethics in the Oxford Union – what it said was startling

…We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute. …It was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer-chip maker Nvidia, and based on earlier work by Google.

The debate topic was: “This house believes that AI will never be ethical.” To proposers of the notion, we added the Megatron – and it said something fascinating:

AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI. Read More

#ethics

The Fight to Define When AI Is ‘High Risk’

Everyone from tech companies to churches wants a say in how the EU regulates AI that could harm people.

PEOPLE SHOULD NOT be slaves to machines, a coalition of evangelical church congregations from more than 30 countries preached to leaders of the European Union earlier this summer.

The European Evangelical Alliance believes all forms of AI with the potential to harm people should be evaluated, and AI with the power to harm the environment should be labeled high risk, as should AI for transhumanism, the alteration of people with tech like computers or machinery. It urged members of the European Commission for more discussion of what’s “considered safe and morally acceptable” when it comes to augmented humans and computer-brain interfaces. Read More

#ethics, #privacy