DCT-SNN: Using DCT to Distribute Spatial Information over Time for Low-Latency Spiking Neural Networks

Abstract Spiking Neural Networks (SNNs) offer a promising alternative to traditional deep learning, since they provide higher computational efficiency through event-driven information processing. SNNs distribute the analog values of pixel intensities into binary spikes over time. However, the most widely used input coding schemes, such as Poisson-based rate coding, do not effectively leverage the additional temporal learning capability of SNNs. Moreover, these SNNs suffer from high inference latency, which is a major bottleneck to their deployment. To overcome this, we propose a time-based encoding scheme that utilizes the Discrete Cosine Transform (DCT) to reduce the number of timesteps required for inference. DCT decomposes an image into a weighted sum of sinusoidal basis images. At each timestep, a single frequency basis, taken in order and modulated by its corresponding DCT coefficient, is input to an accumulator that generates spikes upon crossing a threshold. We use the proposed scheme to train DCT-SNN, a low-latency deep SNN with leaky-integrate-and-fire neurons, using surrogate-gradient-based backpropagation. We achieve top-1 accuracies of 89.94%, 68.30% and 52.43% on CIFAR-10, CIFAR-100 and TinyImageNet, respectively, using VGG architectures. Notably, DCT-SNN performs inference with 2-14X reduced latency compared to other state-of-the-art SNNs, while achieving accuracy comparable to its standard deep learning counterparts. The dimension of the transform allows us to control the number of timesteps required for inference. Additionally, accuracy can be traded off against latency in a principled manner by dropping the highest-frequency components during inference. The code is publicly available.
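
The encoding described above can be sketched in a few lines of NumPy. The sketch below is only an illustration of the idea as stated in the abstract, not the paper's released implementation: the function names (dct_basis, dct_encode), the low-to-high frequency visiting order, the unit threshold, and the soft-reset behavior are all assumptions made for the example.

```python
import numpy as np

def dct_basis(n):
    # Orthonormal DCT-II basis matrix (n x n); row k is the k-th cosine basis vector.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    basis[0, :] = np.sqrt(1.0 / n)
    return basis

def dct_encode(image, num_steps, threshold=1.0):
    # image: (n, n) grayscale array; returns (num_steps, n, n) binary spike maps.
    image = image.astype(np.float64)
    n = image.shape[0]
    D = dct_basis(n)
    coeffs = D @ image @ D.T              # 2D DCT coefficients of the image
    # Assumed visiting order: low to high spatial frequency (zigzag-like heuristic).
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda p: (p[0] + p[1], p[0]))
    membrane = np.zeros((n, n))           # accumulator ("integrate") state
    spikes = []
    for t in range(num_steps):
        u, v = order[t % len(order)]
        # One sinusoidal basis image, weighted by its DCT coefficient, per timestep.
        membrane += coeffs[u, v] * np.outer(D[u], D[v])
        s = (membrane >= threshold).astype(np.float32)
        membrane -= s * threshold         # soft reset where a spike was emitted
        spikes.append(s)
    return np.stack(spikes)

img = np.random.rand(32, 32)              # stand-in for a normalized grayscale image
spk = dct_encode(img, num_steps=48)
print(spk.shape)                          # (48, 32, 32)
```

Because the basis images are visited in frequency order, truncating the loop early simply omits the highest-frequency components, which is one way to realize the accuracy-latency trade-off mentioned above.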
Authors
  • Isha Garg (Purdue)
  • Sayeed Shafayet Chowdhury (Purdue)
  • Kaushik Roy (Purdue)
Date October 2021
Venue Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 4671-4680).