Implicit adversarial data augmentation and robustness with Noise-based Learning

Abstract We introduce a Noise-based Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks. We find that learning random noise, introduced alongside the input and optimized with the same loss function used for posterior maximization, improves a model's adversarial resistance. We show that the learnt noise performs implicit adversarial data augmentation, boosting a model's ability to generalize across adversaries. We evaluate our approach's efficacy and provide a simple visualization tool, based on Principal Component Analysis, for understanding adversarial data. We conduct comprehensive experiments on prevailing benchmarks such as MNIST, CIFAR10, CIFAR100, and Tiny ImageNet, and show that our approach performs remarkably well against a wide range of attacks. Furthermore, combining NoL with state-of-the-art defense mechanisms, such as adversarial training, consistently outperforms prior techniques in both white-box and black-box settings.
Authors
  • Priyadarshini Panda (Yale)
  • Kaushik Roy (Purdue)
Date September 2021
Venue Neural Networks, Volume 141, September 2021, Pages 120-132
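
The abstract describes the NoL mechanism only at a high level: a noise tensor introduced with the input is learned with the same loss that trains the network weights. Below is a minimal PyTorch sketch of that idea. The additive form of the noise, the shared per-pixel noise template, the NoLWrapper name, and all shapes and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of Noise-based Learning (NoL): a learnable noise tensor is
# added to every input and updated with the SAME cross-entropy loss used to
# train the weights. Additive noise and a shared template are assumptions;
# the paper's precise construction may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoLWrapper(nn.Module):
    def __init__(self, model, input_shape):
        super().__init__()
        self.model = model
        # Learnable noise template, small random initialization (assumed).
        self.noise = nn.Parameter(torch.randn(*input_shape) * 0.01)

    def forward(self, x):
        # The noise template broadcasts across the batch dimension.
        return self.model(x + self.noise)

# Usage: weights and noise receive gradients from the same classification
# loss (posterior maximization), so one optimizer step updates both.
model = NoLWrapper(nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)),
                   input_shape=(1, 28, 28))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.rand(32, 1, 28, 28)        # stand-in batch (e.g., MNIST-sized)
y = torch.randint(0, 10, (32,))

opt.zero_grad()
loss = F.cross_entropy(model(x), y)
loss.backward()                      # gradients flow to weights AND noise
opt.step()
```

Because the noise is registered as a parameter, a single optimizer step updates it jointly with the weights, which is the sense in which one loss drives both posterior maximization and noise learning.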