The rapid development of artificial intelligence (AI) is raising urgent questions about ethical and consumer protection issues — from potential bias in algorithmic recruiting decisions to the privacy implications of health monitoring applications.
This survey finds that policymakers have a clear vision of AI's ethical risks and are moving toward implementation; among companies, by contrast, consensus remains much weaker. Read More
This Technique Uses AI to Fool Other AIs
Artificial intelligence has made big strides recently in understanding language, but it can still suffer from an alarming, and potentially dangerous, kind of algorithmic myopia.
Research shows how AI programs that parse and analyze text can be confused and deceived by carefully crafted phrases. A sentence that seems straightforward to you or me may have a strange ability to deceive an AI algorithm. Read More
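The failure mode described above can be made concrete with a toy sketch. The classifier and word lists below are entirely hypothetical, not the systems studied in the research: a naive bag-of-words sentiment model, assumed here for illustration, is flipped simply by swapping one word for a synonym outside its vocabulary, while a human reads both sentences identically.

```python
# Toy bag-of-words sentiment classifier (hypothetical example, not a
# real production model) to illustrate adversarial text substitution.

NEGATIVE = {"bad", "awful", "terrible"}
POSITIVE = {"good", "great", "excellent"}

def classify(sentence: str) -> str:
    """Score known sentiment words; unknown words contribute nothing,
    and a non-negative score is read as positive."""
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

original = "the service was awful"
adversarial = "the service was atrocious"  # same meaning to a human reader

print(classify(original))     # model recognizes "awful" and flags it
print(classify(adversarial))  # unknown synonym slips past the model
```

Real attacks on modern language models are far more sophisticated, but the principle is the same: the model's judgment hinges on surface features a human barely notices, so a meaning-preserving rewrite can silently change its output.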
The Four Components of Trusted Artificial Intelligence
Trust and transparency are at the forefront of conversations related to artificial intelligence (AI) these days. While we intuitively understand the idea of trusting AI agents, we are still trying to figure out the specific mechanics that translate trust and transparency into programmatic constructs. After all, what does trust mean in the context of an AI system? Read More
DeepCode taps AI for code reviews
By leveraging artificial intelligence to help clean up code, DeepCode aims to become to programming what writing assistant Grammarly is to written communications.
Likened to a spell checker for developers, DeepCode’s cloud service reviews code and provides alerts about critical vulnerabilities, with the intent of stopping security bugs from making it into production. The goal is to enable safer, cleaner code and deliver it faster. Read More
Trust, control and personalization through human-centric AI
Our virtual lives lie in the hands of algorithms that govern what we see and don't see, how we perceive the world, and which life choices we make. Artificial intelligence decides which movies are of interest to you, what your social media feeds should look like, and which advertisements have the highest likelihood of convincing you. These algorithms are controlled either by corporations or by governments, each of which tends to have goals that differ from the individual's objectives.
In this article, we dive into the world of human-centric AI, leading to a new era where the individual not only controls the data, but also steers the algorithms to ensure fairness, privacy and trust. Breaking free from filter bubbles and detrimental echo chambers that skew the individual’s worldview allows the user to truly benefit from today’s AI revolution.
While the devil is in the implementation and many open questions still remain, the main purpose of this think piece is to spark a discussion and lay out a vision of how AI can be employed in a human-centric way. Read More
The devil you know: trust in military applications of Artificial Intelligence
This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It is based on a chapter by the authors in the forthcoming book ‘AI at War’ and addresses the fifth question (part d.) which asks what measures the government should take to ensure AI systems for national security are trusted — by the public, end users, strategic decision-makers, and/or allies. Read More