Tag Archives: Ethics

The Defense Innovation Board—an advisory committee of tech executives, scholars, and technologists—has unveiled its list of ethical principles for artificial intelligence (AI). If adopted by the Defense Department, the recommendations will help shape the Pentagon’s use of AI in both combat and non-combat systems. The board’s principles are an important milestone that should be celebrated, but the real challenge of adoption and implementation is just beginning. For the principles to have an impact, the department will need strong leadership from the Joint AI Center (JAIC), buy-in from senior military leadership and outside groups, and additional technical expertise within the Defense Department. Read More
Why fair artificial intelligence might need bias
Businesses across industries are racing to integrate artificial intelligence (AI). Use cases are proliferating, from detecting fraud and increasing sales to improving customer experience, automating routine tasks, and providing predictive analytics.
Because machine learning models rely on algorithms that learn patterns from vast pools of data, however, they are at risk of perpetuating bias present in the information they are fed. In this sense, AI’s mimicking of real-world human decisions is both a strength and a great weakness for the technology: it’s only as ‘good’ as the information it accesses. Read More
In the matter of automated data processing in government decision making
The Legal Education Foundation (“TLEF”) seeks to identify new and emerging areas of law where there are gaps in the legal analysis and the need for increased understanding.
… Ultimately, we conclude that there is a very real possibility that the current use of governmental automated decision-making is breaching the existing equality law framework in the UK, and is “hidden” from sight due to the way in which the technology is being deployed.
Notwithstanding these conclusions, we should emphasise that we fully understand the benefits of automated decision-making. In no way should our Opinion be interpreted as endorsing a blanket resistance to technology that has the potential to increase the speed and perhaps accuracy of important government functions. Rather, our Opinion should be read as a caution against the uncritical acceptance and endorsement of automated decision-making because of its potential to cause damaging unlawful discrimination. Read More
How Machine Learning Pushes Us to Define Fairness
Bias is machine learning’s original sin. It’s embedded in machine learning’s essence: the system learns from data, and thus is prone to picking up the human biases that the data represents. For example, an ML hiring system trained on existing American employment is likely to “learn” that being a woman correlates poorly with being a CEO.
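The dynamic described above can be made concrete with a minimal, entirely hypothetical sketch (the records and the probabilities below are invented for illustration): a naive frequency-based “model” that learns hiring rates from skewed historical data simply reproduces the skew it was given.

```python
# Minimal sketch with invented data: a frequency-based "model" that learns
# P(hired | gender) from historical records reproduces whatever bias those
# records contain.
from collections import Counter, defaultdict

# Hypothetical historical records: (gender, hired_into_senior_role)
records = [
    ("man", True), ("man", True), ("man", False),
    ("man", True), ("woman", False), ("woman", False),
    ("woman", True), ("man", True), ("woman", False),
]

# Tally outcomes per group.
counts = defaultdict(Counter)
for gender, hired in records:
    counts[gender][hired] += 1

def p_hired(gender):
    """Learned probability of hiring for a group, straight from the data."""
    c = counts[gender]
    return c[True] / (c[True] + c[False])

print(p_hired("man"))    # 0.8  (4 of 5)
print(p_hired("woman"))  # 0.25 (1 of 4)
```

Nothing in the code is prejudiced; the skew comes entirely from the training records, which is exactly why cleaning the data, rather than the algorithm, is where the difficulty lies.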
Cleaning the data so thoroughly that the system will discover no hidden, pernicious correlations can be extraordinarily difficult. Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI.
Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Read More
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel, 1960; Wiener, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)—rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs. Read More
The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions
The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them. Read More
Sidewalk Labs, Waterfront Toronto to proceed with Quayside project, but with significant changes
Sidewalk Labs’ controversial proposal to build a high-tech district on Toronto’s waterfront is moving forward, but Waterfront Toronto will incorporate major changes as it moves to assert more control over the project.
The development has been criticized by Ontario’s premier, privacy advocates and those suspicious of Big Tech.
In a significant climbdown, Google sister firm Sidewalk Labs has agreed to a “realignment” of its original master plan, which had called for broad development in Toronto’s Port Lands area and a public commitment from Waterfront Toronto to secure funding and deliver a Light Rail Transit extension on the eastern waterfront. Read More
‘The data is my master.’
Tech triangles and AI ethics: Danit Gal on Chinese AI
Danit Gal is a former Yenching Scholar and coauthor of a recent paper, “Perspectives and Approaches to AI Ethics: East Asia.” On this episode, Gal discusses how Japanese, South Korean, and Chinese experts are forging new paths in the field of artificial intelligence (AI), exploring societal applications — and the unexpected drawbacks of “female” virtual assistants. Gal also explains the tech connections between China and Israel, and the possible impact of the U.S.-China trade war on this relationship. Read More
Perspectives and Approaches in AI Ethics: East Asia
This chapter introduces readers to distinct Chinese, Japanese, and South Korean perspectives on and approaches to AI and robots as tools and partners in the AI ethics debate. Little discussed and often ignored, this sensitive topic commands our attention as it continues to grow in local importance. Given East Asia’s influential position as a source of global inspiration, development, and supply of AI and robotics, we would do well to inform ourselves of what’s to come. Each country’s perspectives on and approaches to AI and robots on the tool-partner spectrum are evaluated by examining its policy, academic thought, local practices, and popular culture. This analysis places South Korea in the tool range, China in the middle of the spectrum, and Japan in the partner range. All three countries hold a salient tension between top-down tool approaches and bottom-up partner perspectives. This tension is likely to increase both in magnitude and importance and shape local and global development and regulation trajectories in the years to come. Read More
The Hitchhiker’s Guide to AI Ethics
Don’t Panic!
“The Hitchhiker’s Guide to AI Ethics is a must read for anyone interested in the ethics of AI. The book is written in the style and spirit that has inspired many sci fi authors. The author’s goal was not only for a short, yet entertaining, but for an entire series.”
Sounds about right! Any guesses who wrote this raving review?
A machine learning algorithm. Read More (Part 2) (Part 3)