Recent studies have highlighted adversarial examples as a ubiquitous threat to different neural network models and many downstream applications. Nonetheless, as unique data properties have inspired distinct and powerful learning principles, this paper aims to explore their potential for mitigating adversarial inputs. In particular, our results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples. Tested on the automatic speech recognition (ASR) task and three recent audio adversarial attacks, we find that (i) input transformations developed from image adversarial defenses provide limited robustness improvement and are vulnerable to advanced attacks; (ii) temporal dependency can be exploited to gain discriminative power against audio adversarial examples and is resistant to the adaptive attacks considered in our experiments. Our results not only show promising means of improving the robustness of ASR systems, but also offer novel insights into exploiting domain-specific data properties to mitigate the negative effects of adversarial examples.
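To make the temporal-dependency idea concrete, the sketch below shows one way such a consistency check could be implemented: transcribe the full audio and its first k-portion, then compare the partial transcript against the corresponding prefix of the full transcript, since benign audio tends to yield consistent prefixes while adversarial perturbations tend to break this consistency. This is a minimal illustration, not the authors' released code; the `transcribe` callable, the portion ratio `k`, and the decision `threshold` are illustrative assumptions.

```python
# Illustrative sketch of a temporal-dependency consistency check for ASR inputs.
# `transcribe` is a hypothetical callable wrapping any ASR system that maps a
# waveform (numpy array) to a transcript string.

import numpy as np


def edit_distance(a: str, b: str) -> int:
    """Character-level Levenshtein distance between two strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]


def td_consistency(audio: np.ndarray, transcribe, k: float = 0.5) -> float:
    """Score in [0, 1] comparing the transcript of the first k-portion of the
    audio with the same-length prefix of the full transcript."""
    full_text = transcribe(audio)
    partial_text = transcribe(audio[: int(len(audio) * k)])
    prefix = full_text[: len(partial_text)]
    denom = max(len(partial_text), len(prefix))
    if denom == 0:
        return 1.0
    return 1.0 - edit_distance(partial_text, prefix) / denom


def is_adversarial(audio: np.ndarray, transcribe, k: float = 0.5,
                   threshold: float = 0.8) -> bool:
    """Flag the input as adversarial when prefix consistency drops below threshold."""
    return td_consistency(audio, transcribe, k) < threshold
```

In this formulation, the defense requires no retraining of the ASR model: it only needs two decoding passes and a string-similarity comparison, which is one reason a temporal-dependency check can remain effective against attacks that were optimized only on the full-length input.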