Presentation
Efficient, Scalable, Robust Neuromorphic High Performance Computing
Description

The rapid advancement in Artificial Neural Networks (ANNs) has paved the way for Spiking Neural Networks (SNNs), which offer significant advantages in energy efficiency and computational speed, especially on neuromorphic hardware. My research focuses on developing Efficient, Robust, and Scalable Heterogeneous Recurrent Spiking Neural Networks (HRSNNs) for high-performance computing, addressing key challenges of traditional digital systems, such as the high energy cost of ADC/DAC conversions and vulnerability to process variations, temperature, and aging.
HRSNNs leverage diversity in neuronal dynamics together with Spike-Timing-Dependent Plasticity (STDP) to increase memory capacity, learn complex patterns, and improve overall network performance. By incorporating unsupervised learning models and biologically plausible pruning techniques, we maintain network stability and computational efficiency. A notable contribution of this work is Lyapunov Noise Pruning (LNP), which exploits temporal overparameterization to significantly reduce network complexity without compromising accuracy.
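To make the plasticity mechanism concrete, the following is a minimal, illustrative Python sketch of the standard pair-based STDP rule applied to a population with heterogeneous per-neuron time constants. All parameter values and variable names are assumptions for illustration only, not the actual HRSNN implementation.

# Minimal sketch of pair-based STDP with heterogeneous neuronal time constants.
# All parameter values are illustrative assumptions, not the HRSNN settings.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100

# Heterogeneity: each neuron draws its own plasticity time constants.
tau_plus = rng.uniform(15.0, 25.0, n_neurons)   # potentiation windows (ms)
tau_minus = rng.uniform(15.0, 25.0, n_neurons)  # depression windows (ms)
a_plus, a_minus = 0.01, 0.012                   # learning rates

def stdp_update(w, dt, tau_p, tau_m):
    # Pair-based STDP: potentiate when the pre-spike precedes the
    # post-spike (dt > 0), depress otherwise.
    if dt > 0:
        return w + a_plus * np.exp(-dt / tau_p)
    return w - a_minus * np.exp(dt / tau_m)

# Example: a pre-synaptic spike leads the post-synaptic spike of neuron 0 by 5 ms.
w = stdp_update(0.5, dt=5.0, tau_p=tau_plus[0], tau_m=tau_minus[0])
print(f"updated weight: {w:.4f}")

Because each neuron carries its own time constants, the same spike-pair interval produces different weight changes across the population, which is the source of the heterogeneity discussed above.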
Our approach also explores DNN-SNN hybrid models, which combine the strengths of deep neural networks and spiking networks for tasks such as object detection, demonstrating competitive accuracy at lower power consumption. Additionally, we propose a Processing-in-Memory (PIM) hardware platform for on-chip acceleration, further enhancing the scalability of our models.
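As a rough sketch of the general hybrid idea (an assumed structure for illustration, not the exact architecture used in this work): a conventional DNN backbone extracts features, which are rate-encoded into spike trains and classified by a leaky integrate-and-fire output layer over a fixed number of timesteps.

# Illustrative DNN->SNN hybrid pipeline; layer sizes, thresholds, and the
# rate-coding scheme are placeholder assumptions, not the reported model.
import numpy as np

rng = np.random.default_rng(1)
T = 25  # number of simulation timesteps

def dnn_backbone(x, w):
    # Stand-in for a trained DNN feature extractor (single ReLU layer here).
    return np.maximum(0.0, x @ w)

def snn_head(features, w_out, threshold=1.0, decay=0.9):
    # Rate-encode features as Bernoulli spike trains and integrate them
    # with leaky integrate-and-fire output neurons.
    rates = features / (features.max() + 1e-8)
    v = np.zeros(w_out.shape[1])
    spike_counts = np.zeros(w_out.shape[1])
    for _ in range(T):
        in_spikes = (rng.random(rates.shape) < rates).astype(float)
        v = decay * v + in_spikes @ w_out
        fired = v >= threshold
        spike_counts += fired
        v[fired] = 0.0  # reset neurons that spiked
    return int(spike_counts.argmax())  # class with the most output spikes

x = rng.random(64)
w_hidden = rng.normal(scale=0.1, size=(64, 128))
w_out = rng.normal(scale=0.1, size=(128, 10))
print("predicted class:", snn_head(dnn_backbone(x, w_hidden), w_out))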
This research represents a step towards scalable, energy-efficient, and robust SNNs, enabling real-time, on-device learning and inference that is crucial for future AI applications in resource-constrained environments.