Robustness Hidden in Plain Sight: Can Analog Computing Defend Against Adversarial Attacks?

Abstract The ever-increasing computational demand of Deep Learning has propelled research in special-purpose inference accelerators based on emerging non-volatile memory (NVM) technologies. Such NVM crossbars promise fast and energy-efficient in-situ matrix-vector multiplications (MVM), thus alleviating the long-standing von Neumann bottleneck in today's digital hardware. However, the analog nature of computing in these NVM crossbars introduces approximations in the MVM operations. In this paper, we study the impact of these non-idealities on the performance of DNNs under adversarial attacks. The non-ideal behavior interferes with the computation of the exact gradient of the model, which is required for adversarial image generation. In a non-adaptive attack, where the attacker is unaware of the analog hardware, we show that analog computing offers a varying degree of intrinsic robustness, with a peak adversarial accuracy improvement of 35.34%, 22.69%, and 31.70% for white-box PGD (ε=1/255, iter=30) on CIFAR-10, CIFAR-100, and ImageNet (top-5), respectively. We also demonstrate "hardware-in-loop" adaptive attacks that circumvent this robustness by utilizing the knowledge of the NVM model. To our knowledge, this is the first work that explores the non-idealities of analog computing for adversarial robustness.
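The sketch below illustrates the non-adaptive attack setting described in the abstract, not the paper's actual code or NVM model: adversarial examples are crafted with white-box PGD on an ideal digital model, then evaluated on an "analog" copy whose MVMs are perturbed. The `NoisyLinear` layer, the Gaussian `noise_std` perturbation, and the PGD step size `alpha` are illustrative assumptions standing in for the paper's crossbar non-ideality model and attack hyperparameters.

```python
# Minimal sketch (assumptions noted above): non-adaptive PGD against a
# simulated analog model. Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    """Linear layer whose weights are perturbed at every forward pass,
    loosely mimicking conductance variations in an NVM crossbar.
    (Hypothetical stand-in for the paper's NVM non-ideality model.)"""
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        noisy_w = self.weight + self.noise_std * torch.randn_like(self.weight)
        return F.linear(x, noisy_w, self.bias)

def pgd_attack(model, x, y, eps=1/255, alpha=0.5/255, iters=30):
    """White-box PGD: iterated signed-gradient steps, projected back onto
    the L-inf ball of radius eps around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv

# Toy models sharing the same weights: the attacker sees only `ideal`,
# while inference runs on the noisy `analog` copy.
ideal = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
analog = nn.Sequential(nn.Flatten(), NoisyLinear(3 * 32 * 32, 10))
analog[1].load_state_dict(ideal[1].state_dict())

x = torch.rand(8, 3, 32, 32)               # stand-in batch, CIFAR-10-sized
y = torch.randint(0, 10, (8,))
x_adv = pgd_attack(ideal, x, y)            # non-adaptive: gradients from ideal model
acc = (analog(x_adv).argmax(dim=1) == y).float().mean().item()
print(f"adversarial accuracy on analog model: {acc:.2%}")
```

In this framing, the "hardware-in-loop" adaptive attack would correspond to computing gradients through the analog model itself, i.e. calling `pgd_attack(analog, x, y)`, so the attacker's gradient already reflects the non-idealities that otherwise provide the intrinsic robustness.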
Authors
  • Deboleena Roy (Purdue)
  • Indranil Chakraborty (Purdue)
  • Timur Ibrayev (Purdue)
  • Kaushik Roy (Purdue)
Date Aug-2020
Venue arXiv preprint [link]