Edge Intelligence with Neuromorphic Computing: From Algorithms to Hardware Design

Date
Apr 27, 2023, 4:30 pm – 6:00 pm

Speaker
Priya Panda, Yale University

Event Description

ECE KORHAMMER SEMINAR SERIES

Abstract:

Spiking Neural Networks (SNNs) have emerged as an alternative to deep learning, especially for edge computing, owing to their substantial energy-efficiency benefits on neuromorphic hardware. In this presentation, I will discuss the roadmap of current activities in the algorithm and hardware design space, with particular reference to compute-in-memory accelerators.
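
For context, the model class at the heart of the talk can be summarized in a few lines. The sketch below shows a minimal leaky integrate-and-fire (LIF) layer, the neuron model most SNN work of this kind builds on: binary spikes are integrated into a leaky membrane potential over discrete timesteps, and a neuron fires when its potential crosses a threshold. All names and constants here are illustrative, not taken from the talk.

```python
import numpy as np

def lif_forward(spikes_in, weights, timesteps, v_thresh=1.0, leak=0.9):
    """Minimal LIF layer sketch. spikes_in has shape (timesteps, n_in);
    weights has shape (n_out, n_in). Parameter values are placeholders."""
    v_mem = np.zeros(weights.shape[0])           # membrane potential state
    spikes_out = np.zeros((timesteps, weights.shape[0]))
    for t in range(timesteps):
        v_mem = leak * v_mem + weights @ spikes_in[t]  # leak, then integrate input spikes
        fired = v_mem >= v_thresh                      # threshold crossing -> binary spike
        spikes_out[t] = fired.astype(float)
        v_mem[fired] = 0.0                             # hard reset after firing
    return spikes_out
```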

In the first half, I will talk about key techniques for training SNNs that yield substantial gains in latency, accuracy, and even robustness across applications such as video segmentation and human activity recognition, as well as beyond traditional learning scenarios, such as federated training and privacy-preserving distributed learning. Then, I will discuss novel architectures with temporal feedback connections, discovered for SNNs using neural architecture search (NAS), that further lower latency, improve energy efficiency, and point to interesting temporal effects. I will also give a brief overview of whether the temporal characteristics of SNNs produce behavior distinct from ANNs, using our recently proposed theoretical tools such as Centered Kernel Alignment and Fisher information metrics.
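
The abstract does not define these representational-similarity tools, but as one concrete reference point, linear Centered Kernel Alignment in its standard form compares two representation matrices, e.g., the activations of an SNN layer at two different timesteps or of matched SNN/ANN layers. A minimal sketch follows (how the speaker actually applies CKA to SNNs is my assumption):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between representation matrices
    X (n_samples x d1) and Y (n_samples x d2); returns a score in [0, 1]."""
    X = X - X.mean(axis=0)      # center each feature dimension
    Y = Y - Y.mean(axis=0)
    # CKA(X, Y) = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(X.T @ Y, ord='fro') ** 2
    den = np.linalg.norm(X.T @ X, ord='fro') * np.linalg.norm(Y.T @ Y, ord='fro')
    return num / den
```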

In the second half, I will delve into the hardware perspective of SNNs implemented on standard CMOS and compute-in-memory (CiM) accelerators, using our recently proposed SATA and SpikeSim tools. It turns out that the multi-timestep computation in SNNs can incur extra memory overhead and repeated DRAM accesses that annul the advantages of compute sparsity. I will highlight techniques, such as early time-step exit, that exploit the temporal dimension of SNNs to reduce this overhead. Finally, I will discuss an algorithm-hardware co-exploration (co-search) framework for CiM hardware that jointly optimizes the peripheral circuitry and the network topology via NAS-based optimization to yield the best performance-energy-efficiency trade-offs.
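
The abstract does not spell out the exit criterion for early time-step exit; the sketch below shows one plausible policy, softmax confidence over logits accumulated across timesteps. Here snn_step, a hypothetical one-timestep forward function, and the confidence threshold are my assumptions, not the speaker's method:

```python
import numpy as np

def classify_with_early_exit(snn_step, x, max_timesteps, conf_thresh=0.9):
    """Run an SNN one timestep at a time, accumulating output logits, and
    stop as soon as the softmax confidence of the running prediction
    crosses a threshold. snn_step(x, t) is a hypothetical function that
    returns the output-layer logits contributed by timestep t."""
    logits = None
    for t in range(max_timesteps):
        logits_t = snn_step(x, t)                 # one-timestep forward pass
        logits = logits_t if logits is None else logits + logits_t
        probs = np.exp(logits - logits.max())     # numerically stable softmax
        probs /= probs.sum()
        if probs.max() >= conf_thresh:            # confident enough: stop early
            return int(probs.argmax()), t + 1     # prediction, timesteps used
    return int(probs.argmax()), max_timesteps
```

Every timestep skipped in this way saves a full forward pass, and with it the repeated membrane-state and weight traffic to DRAM that the abstract identifies as the main source of overhead.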

Bio: 

Priya Panda is an assistant professor in the electrical engineering department at Yale University, USA. She received her B.E. and Master's degrees from BITS Pilani, India, in 2013 and her PhD from Purdue University, USA, in 2019. During her PhD, she interned at Intel Labs, where she developed large-scale spiking neural network algorithms for benchmarking the Loihi chip. She is the recipient of the 2019 Amazon Research Award, the 2022 Google Research Scholar Award, the 2022 DARPA Riser Award, and the 2023 NSF CAREER Award. Her research interests lie in Neuromorphic Computing, Spiking Neural Networks, Energy-efficient Accelerators, and In-Memory Computing.

This seminar is supported by ECE Korhammer Lecture Series Funds.

Sponsor
Electrical and Computer Engineering