Pre-FPO "Training Deep Neural Networks with In-Memory Computing"

Date
May 10, 2022, 12:30 pm – 2:00 pm
Location
Zoom Meeting (see abstract for link)
Event Description

Computer systems are increasingly capable of performing complex tasks such as image classification and natural language processing, thanks in large part to advances in deep neural networks (DNNs). This improvement in performance is largely tied to growth in the number of trainable parameters, which in turn requires a large number of training operations. These training operations are dominated by high-dimensionality matrix-vector multiplications (MVMs). In-memory computing (IMC), a computing approach in which computations are performed in place within dense 2D memory, has increasingly demonstrated efficiency and throughput gains for inference with pre-trained DNNs. This seminar explores the applicability of IMC to training DNNs.
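
To make the IMC operation concrete, the following is a minimal NumPy sketch of how an MVM maps onto a 2D memory array: weights are held in place, the input vector is driven onto the rows, and the column sums are read out through an ADC after being disturbed by analog noise. The function name imc_mvm and the parameters adc_bits and noise_std are illustrative assumptions, not values from the talk; the sketch only shows where input/output quantization and noise enter the computation.

    import numpy as np

    def imc_mvm(weights, x, adc_bits=8, noise_std=0.01, rng=None):
        # weights: (n_rows, n_cols) values stored in the 2D memory array
        # x: (n_rows,) input vector driven onto the array's rows
        rng = np.random.default_rng() if rng is None else rng
        analog = weights.T @ x                      # columns accumulate the products in place
        full_scale = np.abs(analog).max() + 1e-12   # illustrative choice of ADC full scale
        analog = analog + rng.normal(0.0, noise_std * full_scale, size=analog.shape)
        levels = 2 ** (adc_bits - 1) - 1            # signed ADC code range
        codes = np.round(np.clip(analog / full_scale, -1.0, 1.0) * levels)
        return codes * full_scale / levels          # de-quantized column outputs

    # Usage (toy sizes): with noise_std=0.0 the result matches weights.T @ x
    # only up to the ADC resolution, which is the dynamic-range limitation at issue.
    rng = np.random.default_rng(0)
    W, x = rng.normal(size=(256, 64)), rng.normal(size=256)
    y = imc_mvm(W, x)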


IMC fundamentally achieves its efficiency and throughput gains at the cost of dynamic-range limitations, raising distinct challenges for training, where compute-precision requirements are substantially higher than for inference. For IMC, compute precision is set by input and output quantization together with analog noise effects. We present methods for improving IMC compute precision for training. Key to these methods are robust input-encoding schemes that take advantage of natural gradient distributions, and dynamic output quantization that makes efficient use of the output dynamic range. First, we demonstrate a radix-4 one-hot encoding scheme with variable output range, which greatly reduces quantization effects by increasing gradient sparsity. Next, we implement a radix-4 analog-input approach, which reduces output-range switching requirements and provides robustness to analog noise. These methods demonstrate the benefits of IMC for training DNNs, including over 400x energy savings over GPU-based operations.
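
As an illustration of why radix-4 input encoding interacts with gradient sparsity, the sketch below quantizes a gradient vector, splits the codes into radix-4 digits, and performs one low-precision pass per digit before recombining the partial outputs with shift-and-add. This is not the dissertation's exact one-hot encoding or its variable-output-range mechanism, which the abstract does not detail; the names digit_serial_mvm and n_digits are assumptions for the sketch. The point it shows is that because most gradient entries are near zero, the higher-order digits are almost all zero, so the per-digit input vectors are sparse.

    import numpy as np

    def radix4_digits(q, n_digits):
        # Split non-negative integer codes into radix-4 digits, least significant first.
        digits = []
        for _ in range(n_digits):
            digits.append(q % 4)
            q = q // 4
        return digits

    def digit_serial_mvm(weights, grad, n_digits=4):
        # Quantize the gradient vector, run one pass over the array per radix-4 digit,
        # and recombine the partial column sums with shift-and-add (powers of 4).
        scale = np.abs(grad).max() + 1e-12
        signs = np.sign(grad)
        q = np.round(np.abs(grad) / scale * (4 ** n_digits - 1)).astype(np.int64)
        y = np.zeros(weights.shape[1])
        for d, digit in enumerate(radix4_digits(q, n_digits)):
            y += (4 ** d) * (weights.T @ (signs * digit))   # per-digit column sums
        return y * scale / (4 ** n_digits - 1)

In this toy model each per-digit pass only needs to resolve small digit values {0, 1, 2, 3}, so the output dynamic range demanded of each analog readout is far lower than for a full-precision input, which is the kind of trade-off the talk's encoding and dynamic output quantization target.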

Zoom Meeting: https://princeton.zoom.us/j/97813172321

Speaker