Artificial intelligence in Australia needs to get ethical, so we have a plan

Whether a technology is good or bad depends on how it is developed and used. Nowhere is that more topical than in technologies that use artificial intelligence.

When developed and used appropriately, artificial intelligence (AI) has the potential to transform the way we live, work, communicate and travel.

New AI-enabled medical technologies are being developed to improve patient care. There are persuasive indications that autonomous vehicles will improve safety and reduce the road toll. Machine learning and automation are streamlining workflows and allowing us to work smarter.

Around the world, AI-enabled technology is increasingly being adopted by individuals, governments, organisations and institutions. But along with the vast potential to improve our quality of life comes a risk to our basic human rights and freedoms.

Appropriate oversight, guidance and understanding of the way AI is used and developed in Australia must be prioritised. Read More

#ethics

The Problem with AI Ethics

Last week, Google announced that it is creating a new external ethics board to guide its “responsible development of AI.” On the face of it, this seemed like an admirable move, but the company was hit with immediate criticism.

Researchers from Google, Microsoft, Facebook, and top universities objected to the board’s inclusion of Kay Coles James, the president of right-wing think tank The Heritage Foundation. They pointed out that James and her organization campaign against anti-discrimination laws for LGBTQ groups and sponsor climate change denial, making her unfit to offer ethical advice to the world’s most powerful AI company. An open petition demanding James’ removal was launched (it currently has more than 1,700 signatures), and as part of the backlash, one member of the newly formed board resigned.

Google has yet to say anything about all of this (it didn’t respond to multiple requests for comment from The Verge), but to many in the AI community, it’s a clear example of Big Tech’s inability to deal honestly and openly with the ethics of its work. Read More

#ethics

Algorithms have gotten out of control. It's time to regulate them.

McDonald’s recently announced that it had purchased Dynamic Yield, an AI company whose technology it will use to analyze customer habits and sell them more food. When a hamburger shack is using algorithms to stoke sales, it’s clear we have entered a new era. But the ubiquity of algorithms is not merely an evolution of technology. Rather, it represents the emergence of a whole new set of questions around ethics, bias, and equity with which we must grapple. Up until now, algorithms have been deployed with relatively little oversight. It may be time for that to change.

Algorithms, the step-by-step computational procedures used to make decisions, are becoming fundamental to the functioning of modern society. But they also bring with them a heap of problems. For example, a revealing Bloomberg piece recently described how YouTube has a long history of suppressing employee concerns about false or bigoted content on the platform in favor of the AI-based content sorting system that determines which videos the site recommends to users. That’s a problem! Read More

#ethics

Europe’s silver bullet in global AI battle: Ethics

As American and Chinese companies dominate the AI battlefield, the EU has pinned its hopes on becoming a world leader in what it calls “trustworthy” artificial intelligence. Read More

#china-vs-us, #ethics