Why does Beijing suddenly care about AI ethics?

Did China and the US just agree on something?

This week, Chinese scientists and engineers released a code of ethics for artificial intelligence that might signal a willingness from Beijing to rethink how it uses the technology.

And while China’s government is widely criticized for using AI to monitor its citizens, the newly published guidelines seem remarkably similar to ethical frameworks laid out by Western companies and governments.

The Beijing AI Principles were announced last Saturday by the Beijing Academy of Artificial Intelligence (BAAI), an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government. They spell out guiding principles for research and development in AI, including that “human privacy, dignity, freedom, autonomy, and rights should be sufficiently respected.” Read More

#china-vs-us, #ethics

The Pentagon wants your thoughts about AI but may not listen

In February, the Pentagon unveiled an expansive new artificial intelligence strategy that promised the technology would be used to enhance everything the department does, from killing enemies to treating injured soldiers. It said an Obama-era advisory board packed with representatives of the tech industry would help craft guidelines to ensure the technology’s power was used ethically.

In the heart of Silicon Valley on Thursday, that board asked the public for advice. It got an earful—from tech executives, advocacy groups, AI researchers, and veterans, among others. Many were wary of the Pentagon’s AI ambitions and urged the board to lay down rules that would subject the department’s AI projects to close controls and scrutiny. Read More

#ethics

The Growing Marketplace For AI Ethics

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out on, the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI. Below is a brief roundup of some of the more influential models to emerge—from the Asilomar Principles to best-practice recommendations from the AI Now Institute. Read More

#ethics

The new digital divide is between people who opt out of algorithms and people who don’t

Every aspect of life can be guided by artificial intelligence algorithms – from choosing what route to take for your morning commute, to deciding whom to take on a date, to complex legal and judicial matters such as predictive policing.

Big tech companies like Google and Facebook use AI to extract insights from their gargantuan troves of detailed customer data. This allows them to monetize users’ collective preferences through practices such as micro-targeting, a strategy advertisers use to narrowly target specific sets of users.

In parallel, many people now trust platforms and algorithms more than their own governments and civil society. An October 2018 study suggested that people demonstrate “algorithm appreciation,” relying more heavily on advice when they believe it comes from an algorithm than from a human. Read More

#ethics

AI systems should be accountable, explainable, and unbiased, says EU

The EU convened a group of 52 experts who came up with seven requirements they think future AI systems should meet. They are as follows:

Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene in or oversee every decision the software makes.

Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.

Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.

Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.

Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.

Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”

Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Potential negative impacts should be identified and reported in advance. Read More

#ethics

Artificial intelligence: The EU’s 7 steps for trustworthy AI

Do you trust AI? If not, what would it take? The European Commission says there are seven steps to building trust in artificial intelligence. It has published the latest findings from its high-level expert group. Read More

#ethics

The Ethics of Artificial Intelligence

Artificial intelligence is one of the buzzwords of early 21st-century culture. The advent of semi-intelligent tools is rapidly transforming human life, and at the same time the nature and scope of these tools are poorly understood. There is an immediate need to develop a practical framework for building and using AI ethically. To do so, we must clarify the nature of the relationship between humans, the tools we use, and the tasks we perform.

While the needs and values of different cultures vary a great deal, the question of what and how we automate is the same as the question of what kind of society we want. AI poses unique dangers and also great opportunities. Navigating this complex landscape requires that we keep human outcomes central to our thinking and design. Read More

#ethics

Google cancels AI ethics board in response to outcry

This week, Vox and other outlets reported that Google’s newly created AI ethics board was falling apart amid controversy over several of the board members.

Well, it’s officially done falling apart — it’s been canceled. Google told Vox on Thursday that it’s pulling the plug on the ethics board.

The board survived for barely more than one week. Founded to guide “responsible development of AI” at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into problems from the start. Read More

#ethics

Google dissolves AI ethics board just one week after forming it

Google today disclosed that it has dissolved a short-lived external advisory board designed to monitor its use of artificial intelligence, following a week of controversy over the company’s selection of members. The decision, first reported by Vox, is largely due to outcry over the board’s inclusion of Heritage Foundation president Kay Coles James, a noted conservative figure who has openly espoused anti-LGBTQ rhetoric and, through the Heritage Foundation, fought efforts to extend rights to transgender people and to combat climate change. Read More

#ethics

Microsoft AI principles

Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values. Read More

#ethics