Deep Neural Networks (DNNs) have been widely applied in various recognition tasks. However, DNNs have recently been shown to be vulnerable to adversarial examples, which can mislead them into making arbitrary incorrect predictions. While adversarial examples are well studied in classification tasks, other learning problems may have different properties. For instance, semantic segmentation requires additional components such as dilated convolutions and multiscale processing. In this paper, we aim to characterize adversarial examples based on spatial context information in semantic segmentation. We observe that spatial consistency information can potentially be leveraged to detect adversarial examples robustly, even when a strong adaptive attacker has access to the model and the detection strategy. We also show that adversarial examples produced by the attacks considered in this paper barely transfer among models, even though transferability is common in classification. Our observations shed new light on developing adversarial attacks and defenses to better understand the vulnerabilities of DNNs.
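
As a rough illustration of the spatial consistency idea (a minimal sketch, not the paper's exact procedure), the snippet below compares a segmentation model's predictions on two overlapping, randomly placed patches and scores their agreement on the overlap region with mean IoU; benign images tend to yield consistent predictions there, while adversarial ones often do not. The `segment_fn` callable, patch size, and number of patch pairs are hypothetical placeholders.

```python
import numpy as np

def miou(a, b, num_classes):
    """Mean IoU between two label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(a == c, b == c).sum()
        union = np.logical_or(a == c, b == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 1.0

def spatial_consistency_score(image, segment_fn, patch_size=256,
                              num_pairs=8, num_classes=19, rng=None):
    """Score agreement of patch-wise predictions on overlapping regions.

    image: (H, W, C) array with H, W >= patch_size.
    segment_fn: hypothetical callable mapping a patch to an (h, w) label map.
    Returns the average overlap mIoU over several random patch pairs;
    a low score suggests the input may be adversarial.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    scores = []
    for _ in range(num_pairs):
        # Sample a first patch, then a second one shifted so they overlap.
        y1 = int(rng.integers(0, h - patch_size + 1))
        x1 = int(rng.integers(0, w - patch_size + 1))
        y2 = int(np.clip(y1 + rng.integers(-patch_size // 2, patch_size // 2 + 1),
                         0, h - patch_size))
        x2 = int(np.clip(x1 + rng.integers(-patch_size // 2, patch_size // 2 + 1),
                         0, w - patch_size))
        pred1 = segment_fn(image[y1:y1 + patch_size, x1:x1 + patch_size])
        pred2 = segment_fn(image[y2:y2 + patch_size, x2:x2 + patch_size])
        # Overlap region in global image coordinates.
        oy1, oy2 = max(y1, y2), min(y1, y2) + patch_size
        ox1, ox2 = max(x1, x2), min(x1, x2) + patch_size
        if oy2 <= oy1 or ox2 <= ox1:
            continue
        # Crop both predictions to the shared overlap and compare.
        p1 = pred1[oy1 - y1:oy2 - y1, ox1 - x1:ox2 - x1]
        p2 = pred2[oy1 - y2:oy2 - y2, ox1 - x2:ox2 - x2]
        scores.append(miou(p1, p2, num_classes))
    return float(np.mean(scores)) if scores else 1.0
```

In practice, such a score could be thresholded on held-out benign images to flag inputs whose patch predictions disagree; the threshold and patch sampling scheme are design choices not specified here.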