Towards Deep Learning Models Resistant to Adversarial Attacks.
Abstract
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples—inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust...
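The adversarial examples the abstract refers to can be produced with very simple gradient-based perturbations. Below is a minimal illustrative sketch using the fast gradient sign method on a toy linear classifier; the model, weights, and `eps` value are invented for demonstration and are not taken from the paper (whose method centers on projected gradient descent and robust optimization).

```python
import numpy as np

# Toy linear classifier: scores = W @ x, predicted class = argmax.
# All numbers here are synthetic; this only illustrates the idea of an
# L-infinity-bounded adversarial perturbation.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))   # 3 classes, 8 input features
x = rng.standard_normal(8)        # a "natural" input
y = int(np.argmax(W @ x))         # class the model assigns to x

def margin_loss_grad(W, x, y):
    """Gradient w.r.t. x of (runner-up score - true-class score)."""
    scores = W @ x
    scores[y] = -np.inf                      # mask out the predicted class
    runner_up = int(np.argmax(scores))
    return W[runner_up] - W[y]               # linear model: gradient is constant

eps = 0.1                                    # per-coordinate perturbation budget
g = margin_loss_grad(W, x, y)
x_adv = x + eps * np.sign(g)                 # one FGSM step inside the eps-ball

print("clean prediction:", y)
print("adversarial prediction:", int(np.argmax(W @ x_adv)))
print("max |perturbation|:", np.max(np.abs(x_adv - x)))
```

The perturbation stays within `eps` in every coordinate, so `x_adv` remains close to `x`, yet it moves the score margin toward a competing class; iterating such steps with projection yields the PGD attack studied in the paper.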
Paper Details
Published Date
Feb 15, 2018
Journal