Implicit Generative Modeling of Random Noise during Training improves Adversarial Robustness

Abstract We introduce a Noise-based prior Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks. We find that implicit generative modeling of random noise with the same loss function used during posterior maximization improves a model's understanding of the data manifold, furthering adversarial robustness. We evaluate our approach's efficacy and provide a simple visualization tool for understanding adversarial data, using Principal Component Analysis. Our analysis reveals that adversarial robustness, in general, manifests in models with higher variance along the high-ranked principal components. We show that models learnt with our approach perform remarkably well against a wide range of attacks. Furthermore, combining NoL with state-of-the-art adversarial training extends a model's robustness even beyond what it is adversarially trained for, under both white-box and black-box attacks. Source code available at panda1230/Adversarial NoiseLearning NoL.
Authors
  • Priyadarshini Panda (Purdue)
  • Kaushik Roy (Purdue)
Date Jun-2019
Venue International Conference on Machine Learning 2019
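The core NoL idea described in the abstract, learning a noise prior with the very same loss used to train the weights, can be sketched as below. This is a minimal illustrative sketch, not the authors' released implementation: the additive noise template, its initialization, the descent update direction for the noise, and all hyperparameters are assumptions made for clarity.

```python
# Hedged sketch of Noise-based prior Learning (NoL) in PyTorch.
# Assumed: a single learnable additive noise template shared across a mini-batch,
# updated by gradients of the same cross-entropy loss that trains the weights.
import torch
import torch.nn.functional as F


def train_nol(model, loader, epochs=10, lr=0.1, noise_lr=0.1, device="cpu"):
    model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    noise = None  # learnable noise template, created lazily from the first batch shape

    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            if noise is None:
                # Small random initialization of the noise template (assumed).
                noise = (torch.randn_like(x[0]) * 0.01).requires_grad_(True)

            opt.zero_grad()
            # Perturb every input with the current noise template (assumed additive).
            logits = model(x + noise)
            loss = F.cross_entropy(logits, y)  # same loss drives both weights and noise
            loss.backward()

            # Weight update (posterior maximization over the labels).
            opt.step()
            # Noise update from the same loss gradient (implicit modeling of the
            # noise prior); the descent direction here is an assumption.
            with torch.no_grad():
                noise -= noise_lr * noise.grad
                noise.grad.zero_()
    return model, noise
```

In this sketch the noise tensor broadcasts over the batch dimension, so one template perturbs every example; whether NoL uses per-batch, per-example, or shared templates is a design choice this example does not settle.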