BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250626T233632Z
LOCATION:B302-B305
DTSTART;TZID=America/New_York:20241119T120000
DTEND;TZID=America/New_York:20241119T170000
UID:submissions.supercomputing.org_SC24_sess487_drs119@linklings.com
SUMMARY:Efficient, Scalable, Robust Neuromorphic High Performance Computin
 g
DESCRIPTION:Biswadeep Chakraborty (Georgia Institute of Technology)\n\nThe
  rapid advancement in Artificial Neural Networks (ANNs) has paved the way 
 for Spiking Neural Networks (SNNs), which offer significant advantages in 
 energy efficiency and computational speed, especially on neuromorphic hard
 ware. My research focuses on the development of Efficient, Robust, and Sca
 lable Heterogeneous Recurrent Spiking Neural Networks (HRSNNs) for high-pe
 rformance computing, addressing key challenges in traditional digital syst
 ems, such as high energy consumption due to ADC/DAC conversions and vulner
 ability to process variations, temperature, and aging.\n\nHRSNNs leverage 
 the diversity in neuronal dynamics and Spike-Timing-Dependent Plasticity (
 STDP) to improve memory capacity, learn complex patterns, and enhance netw
 ork performance. By incorporating unsupervised learning models and biologi
 cally plausible pruning techniques, we maintain network stability and comp
 utational efficiency. A notable contribution of this work is the introduct
 ion of Lyapunov Noise Pruning (LNP), which leverages temporal overparamete
 rization to achieve significant reductions in network complexity without c
 ompromising accuracy.\n\nOur approach also explores DNN-SNN hybrid models,
  which combine the strengths of deep neural networks and spiking networks 
 for tasks such as object detection, demonstrating competitive accuracy wit
 h lower power consumption. Additionally, we propose a Processing-in-Memory
  (PIM) hardware platform for on-chip acceleration, further enhancing the s
 calability of our models.\n\nThis research represents a step towards scala
 ble, energy-efficient, and robust SNNs, enabling their deployment for real
 -time, on-device learning and inference, which is crucial for future AI ap
 plications in resource-constrained environments.\n\nRegistration Category:
  Tech Pro
 gram Reg Pass, Exhibits Reg Pass\n\nSession Chairs: Ayesha Afzal (Friedric
 h-Alexander University, Erlangen-Nuremberg; Erlangen National High Perform
 ance Computing Center); Sally Ellingson (University of Kentucky); and Alan
  Sussman (University of Maryland)\n\n
END:VEVENT
END:VCALENDAR
