Artificial intelligence (AI) is at the cutting edge of innovation. But how do companies find the expertise necessary to utilize it, and then take it to market? In this video, recorded at the Aspen Ideas Festival in June, Andrew Ng, cofounder of Coursera, AI Fund, and Landing.AI, discusses the difference between an AI-enabled business and a true AI company, and how businesses can organize, hire, and use AI to add value.
Daily Archives: December 2, 2019
Chinese tech groups shaping UN facial recognition standards
Chinese technology companies are shaping new facial recognition and surveillance standards at the UN, according to leaked documents obtained by the Financial Times, as they try to open up new markets in the developing world for their cutting-edge technologies.
Companies such as ZTE, Dahua and China Telecom are among those proposing new international standards — specifications aimed at creating universally consistent technology — in the UN’s International Telecommunication Union (ITU) for facial recognition, video monitoring, city and vehicle surveillance.
TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents
Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time. In this work, we show that these training-time vulnerabilities extend to deep reinforcement learning (DRL) agents and can be exploited by an adversary with access to the training process. In particular, we focus on Trojan attacks that augment the function of reinforcement learning policies with hidden behaviors. We demonstrate that such attacks can be implemented through minuscule data poisoning (as little as 0.025% of the training data) and in-band reward modification that does not affect the reward on normal inputs. The policies learned with our proposed attack approach are nearly indistinguishable from benign policies but deteriorate drastically when the Trojan is triggered, in both targeted and untargeted settings. Furthermore, we show that existing Trojan defense mechanisms for classification tasks are not effective in the reinforcement learning setting.
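To make the attack mechanics concrete, here is a minimal, hypothetical sketch of the kind of poisoning the abstract describes: a tiny fraction of training transitions (0.025%, per the paper) is stamped with a trigger patch, relabeled with the attacker's target action, and given an in-band reward, i.e. one kept within the range of normal rewards. This is not the authors' actual TrojDRL implementation; the trigger patch size and location, the target action, and the batch layout are all assumptions for illustration.

```python
import numpy as np

def poison_batch(observations, actions, rewards,
                 poison_rate=0.00025,   # 0.025% of transitions, per the abstract
                 target_action=0,       # hypothetical attacker-chosen action
                 trigger_value=255):
    """Illustrative Trojan poisoning of a DRL training batch.

    A small fraction of transitions is stamped with a visual trigger
    patch, the recorded action is replaced with the attacker's target
    action, and the reward is set to the maximum observed (in-band)
    reward, so rewards on clean inputs are unaffected.
    """
    obs = observations.copy()
    act = actions.copy()
    rew = rewards.copy()

    n = len(obs)
    n_poison = max(1, int(n * poison_rate))
    idx = np.random.choice(n, size=n_poison, replace=False)

    r_hi = rewards.max()  # stay inside the normal reward range (in-band)
    for i in idx:
        obs[i, :3, :3] = trigger_value  # stamp a 3x3 trigger patch in a corner
        act[i] = target_action          # associate the trigger with the target action
        rew[i] = r_hi                   # reward the target action, in-band

    return obs, act, rew, idx
```

A targeted attack in this sketch rewards one fixed action whenever the trigger is present; the untargeted variant the abstract mentions would instead degrade performance on triggered inputs without steering toward a specific action.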
#adversarial