Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks, one that can enable low-power event-driven data analytics. The best-performing SNNs for image recognition tasks are obtained by converting a trained Analog Neural Network (ANN), consisting of Rectified Linear Units (ReLU), to an SNN composed of integrate-and-fire neurons with "proper" firing thresholds. The converted SNNs typically incur a loss in accuracy compared to the original ANN and require a sizable number of inference time-steps to achieve their best accuracy. We find that the performance degradation in the converted SNN stems from the use of a "hard reset" spiking neuron, which is driven to a fixed reset potential once its membrane potential exceeds the firing threshold, leading to information loss during SNN inference. We propose ANN-SNN conversion using a "soft reset" spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the "residual" membrane potential above the threshold at the firing instants. We demonstrate near-lossless ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10 (93.63% top-1), CIFAR-100 (70.93% top-1), and ImageNet (73.09% top-1 accuracy). Our results also show that RMP-SNN surpasses the best inference accuracy provided by the converted SNN with "hard reset" spiking neurons using 2-8× fewer inference time-steps across network architectures and datasets.