Who Should Stop Unethical A.I.?

At artificial-intelligence conferences, researchers are increasingly alarmed by what they see.

In computer science, the main outlets for peer-reviewed research are not journals but conferences, where accepted papers are presented in the form of talks or posters. In June, 2019, at a large artificial-intelligence conference in Long Beach, California, called Computer Vision and Pattern Recognition, I stopped to look at a poster for a project called Speech2Face. Using machine learning, researchers had developed an algorithm that generated images of faces from recordings of speech. A neat idea, I thought, but one with unimpressive results: at best, the faces matched the speakers’ sex, age, and ethnicity—attributes that a casual listener might guess. That December, I saw a similar poster at another large A.I. conference, Neural Information Processing Systems (NeurIPS), in Vancouver, Canada. I didn’t pay it much mind, either. Read More

#ethics

Ai-Da, the first robot artist to exhibit herself

Ai-Da, a humanoid artificial intelligence robot, will exhibit a series of self-portraits that she created by “looking” into a mirror integrated with her camera eyes. Read More

#image-recognition, #robotics, #videos

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models. Read More
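
For readers unfamiliar with the pretrain-then-fine-tune recipe the abstract refers to, here is a minimal sketch, assuming the Hugging Face transformers library and an invented two-label task; the texts, labels, and hyperparameters are placeholders, not anything from the paper.

```python
# Minimal sketch of the pretrain-then-fine-tune recipe described above,
# assuming the Hugging Face "transformers" library and a toy two-label task.
# The texts, labels, and hyperparameters here are illustrative placeholders.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # new classification head on top of BERT
)

# Tiny in-memory dataset standing in for a real benchmark corpus.
texts = ["the model works well", "the results were disappointing"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few passes over the toy batch
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()   # gradients flow through the pretrained encoder too
    optimizer.step()
    optimizer.zero_grad()
print(outputs.loss.item())
```

The same pattern, scaled up to a real corpus and benchmark, is what "fine-tuning for specific tasks" means in the abstract; the environmental and financial costs the authors weigh come from the pretraining step, not this lightweight adaptation.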

#nlp

Why Some Models Leak Data

Machine learning models use large amounts of data, some of which can be sensitive. If they’re not trained correctly, sometimes that data is inadvertently revealed.

… Models of real-world data are often quite complex—this can improve accuracy, but makes them more susceptible to unexpectedly leaking information. Medical models have inadvertently revealed patients’ genetic markers. Language models have memorized credit card numbers. Faces can even be reconstructed from image models.

… Training models with differential privacy stops the training data from leaking by limiting how much the model can learn from any one data point. Differentially private models are still at the cutting edge of research, but they’re being packaged into machine learning frameworks, making them much easier to use. When it isn’t possible to train differentially private models, there are also tools that can measure how much data the model is memorizing. Read More
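
A minimal sketch of the idea behind differentially private training (DP-SGD): clip each example's gradient, then add calibrated noise, so no single record can dominate an update. The values below are illustrative, and a real project would reach for a vetted library such as Opacus or TensorFlow Privacy rather than hand-rolled NumPy.

```python
# Minimal NumPy sketch of the DP-SGD idea for logistic regression:
# clip each example's gradient, then add Gaussian noise, so that
# no single training record can move the model very far.
# clip_norm and noise_multiplier are illustrative values, not a tuned privacy budget.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                               # toy features
y = (X[:, 0] + 0.1 * rng.normal(size=64) > 0).astype(float)
w = np.zeros(5)
clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.1

for _ in range(200):
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X           # one gradient per record
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
    w -= lr * (clipped.sum(axis=0) + noise) / len(X)       # noisy, clipped average
print(w)
```

The clipping bounds each record's influence and the noise hides whatever influence remains, which is exactly why a differentially private model cannot simply memorize a credit card number it saw once.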

#adversarial, #privacy, #model-attacks

Chinese Technology Platforms Operating in the United States: Assessing the Threat

… Going forward, the US government has an urgent need for smart policies and practices to respond to China’s growing tech sector and the spread of China-controlled platforms. The Biden administration will have to decide what to do about TikTok and WeChat. It also will need to develop a broader US strategy for addressing the range of security risks (e.g., economic, national security, cybersecurity) and threats to civil liberties posed by the spread of China-developed and -controlled technologies.

This report seeks to contribute to these efforts by suggesting a comprehensive framework for understanding and assessing the risks posed by Chinese technology platforms in the United States. It is the product of a working group convened by the Tech, Law & Security Program at American University Washington College of Law and the National Security, Technology, and Law Working Group at the Hoover Institution at Stanford University. Read More

#china-vs-us

How the Kremlin Uses Agenda Setting to Paint Democracy in Panic

Since November 2020, the world has watched the presidential transition in the United States with unease. After a violent mob of Trump supporters stormed the U.S. Capitol on Jan. 6 in an effort to overturn Joe Biden’s election, headlines around the world questioned, for the first time, whether a democratic transfer of power would occur as expected. These reports also covered the well-documented risk of violence at President Biden’s inauguration.

…But Russian media tell a different story. By flooding the front pages of its media with headlines of continued unrest, opposition criticism and government suppression, the Kremlin has pulled out an old playbook in its efforts to sway global opinion against the promise of Western liberalism. And these tactics, compared to the shadowy bots and trolls we’ve grown to associate with Russian influence operations, may prove even tougher to counter. Read More

#russia

Using Machine Learning to Fill Gaps in Chinese AI Market Data

In this proof-of-concept project, CSET and Amplyfi Ltd. used machine learning models and Chinese-language web data to identify Chinese companies active in artificial intelligence. Most of these companies were not labeled or described as AI-related in two high-quality commercial datasets. The authors’ findings show that using structured data alone—even from the best providers—will yield an incomplete picture of the Chinese AI landscape. Read More
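
The report does not publish its models, but the general pattern of labeling companies as AI-related from their text descriptions is easy to illustrate. Below is a hedged sketch assuming scikit-learn, with invented English descriptions and labels standing in for the Chinese-language web data; it is not CSET's or Amplyfi's pipeline.

```python
# Illustrative only: a bag-of-words classifier that flags company descriptions
# as AI-related or not, the general pattern behind filling gaps in market data.
# scikit-learn is assumed; the toy descriptions and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "computer vision chips for autonomous driving",
    "machine learning platform for speech recognition",
    "wholesale distributor of office furniture",
    "restaurant chain operating in three provinces",
]
train_labels = [1, 1, 0, 0]  # 1 = AI-related, 0 = not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_docs, train_labels)
print(clf.predict(["natural language processing for customer service bots"]))
```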

#china-ai

Fetching AI Data: Researchers Get Leg Up on Teaching Dogs New Tricks with NVIDIA Jetson

AI is going to the dogs. Literally.

Colorado State University researchers Jason Stock and Tom Cavey have published a paper on an AI system to recognize and reward dogs for responding to commands.

The graduate students in computer science trained image classification networks to determine whether a dog is sitting, standing or lying. If a dog responds to a command by adopting the correct posture, the machine dispenses a treat. Read More
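
As a rough illustration of the posture-classification step (not the authors' code), the sketch below assumes torchvision's ResNet-18 with a replacement three-class head; the treat-dispensing call is a placeholder for whatever hardware interface the Jetson-based dispenser actually uses.

```python
# Hedged sketch of the posture-classification step: a pretrained ResNet-18
# with a three-class head (sit / stand / lie). The "dispense treat" print is a
# placeholder for the Jetson-controlled hardware trigger.
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["lying", "sitting", "standing"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))  # new head
model.eval()  # in practice the head would be fine-tuned on labeled dog images

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def check_frame(path: str, command: str = "sitting") -> None:
    """Classify one camera frame and reward the dog if it matches the command."""
    frame = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        posture = CLASSES[model(frame).argmax(dim=1).item()]
    if posture == command:
        print("dispense treat")  # placeholder for the hardware trigger
```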

#image-recognition, #nvidia

AffectiveSpotlight: Facilitating the Communication of Affective Responses from Audience Members during Online Presentations

The ability to monitor audience reactions is critical when delivering presentations. However, current videoconferencing platforms offer limited solutions to support this. This work leverages recent advances in affect sensing to capture and facilitate communication of relevant audience signals. Using an exploratory survey (N=175), we assessed the most relevant audience responses, such as confusion, engagement, and head-nods. We then implemented AffectiveSpotlight, a Microsoft Teams bot that analyzes facial responses and head gestures of audience members and dynamically spotlights the most expressive ones. In a within-subjects study with 14 groups (N=117), we observed that, compared to two control conditions (a randomly selected spotlight and the default platform UI), the system made presenters significantly more aware of their audience, led them to speak for a longer period of time, and brought their self-assessments of talk quality closer to the audience’s ratings. We provide design recommendations for future affective interfaces for online presentations based on feedback from the study. Read More
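
The Teams bot itself is not public, but the spotlight-selection step it describes can be sketched: assume an upstream affect model that emits a per-person expressiveness score every few seconds, and spotlight whoever has the highest recent average. The names, scores, and window size below are invented for illustration.

```python
# Toy sketch of the "spotlight the most expressive member" step, assuming an
# upstream affect model that emits a per-person expressiveness score
# (e.g., confusion, smiles, head-nods) every few seconds. Names and scores
# are invented; the real system is a Microsoft Teams bot.
from collections import defaultdict, deque

WINDOW = 3  # number of recent score updates to average over
history: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def update(person: str, score: float) -> str:
    """Record a new score and return who should be spotlighted right now."""
    history[person].append(score)
    return max(history, key=lambda p: sum(history[p]) / len(history[p]))

update("alice", 0.2)
update("bob", 0.7)           # strong head-nod detected
print(update("carol", 0.4))  # -> "bob"
```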

#image-recognition, #surveillance

Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily lives. In this paper, we show that, despite their current huge success, deep-learning-based AI systems can be easily fooled by subtle adversarial noise into misinterpreting the intention of an action in interaction scenarios. Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions, and demonstrate how DNN-based interaction models can be tricked into predicting the participants’ reactions in unexpected ways. From a broader perspective, the scope of our proposed attack method is not confined to problems related to skeleton data but can also be extended to any type of problem involving sequential regression. Our study highlights potential risks in the interaction loop between AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications. Read More
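
The paper's skeleton-specific attack is not reproduced here; the sketch below only shows the generic gradient-sign (FGSM-style) mechanism that this family of attacks builds on, using a toy PyTorch model and random stand-in data rather than the authors' interaction model.

```python
# FGSM-style illustration of the kind of subtle perturbation the paper studies:
# take the gradient of the loss with respect to an input sequence and nudge the
# input by epsilon in the sign of that gradient. The tiny linear "interaction
# model" and the data here are toy stand-ins, not the authors' skeleton-based setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(10 * 6, 4))  # 4 "reaction" classes
seq = torch.randn(1, 10, 6, requires_grad=True)            # toy motion sequence
true_label = torch.tensor([2])

loss = nn.functional.cross_entropy(model(seq), true_label)
loss.backward()

epsilon = 0.05                             # small, hard-to-notice perturbation budget
adv_seq = seq + epsilon * seq.grad.sign()  # adversarial version of the input
print(model(seq).argmax(dim=1), model(adv_seq).argmax(dim=1))
```

Because the perturbation is bounded by epsilon per coordinate, the adversarial sequence can look essentially identical to the original while still pushing the model toward a different predicted reaction.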

#adversarial