Professor Weiyu Xu, University of Iowa
Abstract
In this talk, we study the adversarial robustness of deep neural networks for classification tasks. We examine the smallest magnitude of an additive perturbation that can change the output of a classification algorithm. We provide a matrix-theoretic explanation of the adversarial fragility of deep neural networks for classification. In particular, our theoretical results show that a neural network's adversarial robustness can degrade as the input dimension d increases. Analytically, we show that a neural network's adversarial robustness can be only a 1/√d fraction of the best possible adversarial robustness. Our matrix-theoretic explanation is consistent with an earlier information-theoretic, feature-compression-based explanation for the adversarial fragility of neural networks.
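The notion of the smallest output-changing perturbation can be made concrete in the simplest setting. The sketch below (an illustration only, not the speaker's construction) computes the minimum-norm additive perturbation that moves an input onto the decision boundary of a linear classifier sign(w·x); its magnitude is |w·x| / ‖w‖, the classifier's margin at x.

```python
import numpy as np

def min_flip_perturbation(w, x):
    """Minimum-norm additive perturbation moving x onto the decision
    boundary of the linear classifier sign(w @ x); any further step in
    the same direction flips the predicted label."""
    margin = w @ x
    # Project x onto the hyperplane {z : w @ z = 0}.
    return -margin * w / (w @ w)

# Hypothetical random instance in dimension d (for illustration).
rng = np.random.default_rng(0)
d = 1000
w = rng.standard_normal(d)
x = rng.standard_normal(d)

delta = min_flip_perturbation(w, x)

# The perturbed point lies exactly on the decision boundary ...
assert abs(w @ (x + delta)) < 1e-9
# ... and the perturbation's norm equals the margin |w @ x| / ||w||.
assert np.isclose(np.linalg.norm(delta), abs(w @ x) / np.linalg.norm(w))
```

For this linear case the margin is the exact robustness radius; the talk's results concern how far neural networks can fall short of such a best-possible radius as d grows.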
Bio
Weiyu Xu is a professor in the Department of Electrical and Computer Engineering at the University of Iowa. He received his Ph.D. degree from the California Institute of Technology in 2009, his M.E. degree from Tsinghua University in 2005, and his B.E. degree from Beijing University of Posts and Telecommunications in 2002. His research interests include signal processing, optimization, high-dimensional data analytics, and the mathematical understanding of machine learning algorithms.