Deep neural networks are highly expressive machine learning models. Researchers have found that it is surprisingly easy to fool them with an imperceptible but carefully constructed nudge to the input. Adversarial training defends against such attacks by playing the attacker: generating adversarial examples against your own network and then explicitly training the model not to be fooled by them. Defensive distillation instead trains a secondary model whose decision surface is smoothed in the directions an attacker will typically try to exploit, making it hard for the attacker to discover input tweaks that lead to incorrect classification. Read More
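The adversarial-training loop described above can be sketched in a few lines. This is a minimal illustration, not a production defense: it uses a logistic-regression "network" (so gradients are analytic) and the Fast Gradient Sign Method as the attack; the function names and data are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method: nudge x in the sign of the loss gradient.
    For logistic regression with BCE loss, d(loss)/dx = (sigmoid(w.x + b) - y) * w."""
    grad_x = (sigmoid(x @ w + b) - y) * w
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Train while playing the attacker: each epoch, craft adversarial copies of
    the batch and fit on clean + adversarial examples together."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        # Attack the current model parameters ...
        X_adv = np.array([fgsm_perturb(x, y, w, b, eps) for x, y in zip(X, Y)])
        # ... then explicitly train the model not to be fooled by the result.
        X_all = np.vstack([X, X_adv])
        Y_all = np.concatenate([Y, Y])
        p = sigmoid(X_all @ w + b)
        w -= lr * (X_all.T @ (p - Y_all)) / len(Y_all)
        b -= lr * np.mean(p - Y_all)
    return w, b
```

The same pattern carries over to deep networks: compute the input gradient by backpropagation, perturb, and include the perturbed batch in the training step.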
Daily Archives: October 12, 2019
Facebook’s Captum brings explainability to machine learning
Facebook today introduced Captum, a library for explaining decisions made by neural networks, built on the PyTorch deep learning framework. Captum implements state-of-the-art interpretability algorithms such as Integrated Gradients, DeepLIFT, and Conductance. It allows researchers and developers to interpret decisions made in multimodal settings that combine, for example, text, images, and video, and to compare results across the attribution methods within the library. Read More