Abstract
Automatic target recognition (ATR) tasks, which typically employ state-of-the-art deep learning inference techniques, can be bottlenecked by energy-expensive data movement operations in power-constrained edge devices. Consequently, computing paradigms such as in-memory computing (IMC) are rapidly gaining traction in the design of deep neural network (DNN) accelerators. Resistive random access memory (ReRAM), a technology that offers high storage density, has emerged as a promising candidate for architecting IMC-based DNN accelerators. In this work, we perform DNN inference on the MSTAR dataset using a ReRAM-based IMC architecture to enable energy-efficient ATR applications. Our proposed methodology outperforms commercial off-the-shelf general-purpose graphics processing units in inference energy and throughput by ∼22× and ∼27×, respectively.