Energy-Efficient Implementation of Machine Learning Algorithms

Thu, Jun 10, 2021, 11:30 am
Location: 
meet.google.com/zke-eqtg-azw
Speaker(s): 

Pattern-recognition algorithms from the domain of machine learning play a prominent role in embedded sensing systems, where they are used to derive inferences from sensor data. Such systems very often face severe energy constraints. The focus of this thesis is on mitigating the energy required for computation, communication, and storage by exploiting a range of algorithmic transformations.

The first part of this work transforms linear signal-processing computations while preserving a similarity metric widely used in pattern recognition. We use random projections to approximately preserve inner products between source vectors, and we show that such projections can be exploited for a significant reduction in computational energy while avoiding a significant source of error (a sketch of the idea follows the abstract). We refer to this approach as compressed signal processing (CSP).

The second part focuses on signal processing that may not be linear. Approximate computing is a broad approach that has recently received considerable attention in the context of inference systems. Here, we explore the use of genetic programming (GP) to compute approximate features (see the second sketch below), and we leverage a method that enhances tolerance to feature-data noise through directed retraining of the inference stage.

The third part considers multi-task algorithms. By exploiting the concept of transfer learning together with energy-efficient dataflow accelerators, we show that convolutional autoencoders can deliver various levels of reduction in computational energy while avoiding a significant loss in inference performance when multiple task categories are targeted for inference (see the final sketch below).
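
As a concrete illustration of the CSP idea, the following minimal NumPy sketch (ours, not taken from the thesis) shows how a Gaussian random projection approximately preserves the inner product between two high-dimensional vectors; the dimensions and the 1/sqrt(k) scaling are illustrative choices.

```python
import numpy as np

# Minimal sketch (not the thesis's CSP pipeline): a random projection R
# with i.i.d. Gaussian entries approximately preserves inner products
# between high-dimensional vectors (Johnson-Lindenstrauss style).
rng = np.random.default_rng(0)

d, k = 1024, 128             # original and projected dimensionality
x = rng.standard_normal(d)   # two example "source" vectors
y = rng.standard_normal(d)

# Scaling by 1/sqrt(k) makes the projected inner product unbiased.
R = rng.standard_normal((k, d)) / np.sqrt(k)

exact = x @ y                # inner product in the original domain
approx = (R @ x) @ (R @ y)   # inner product in the compressed domain

print(f"exact: {exact:.2f}, compressed: {approx:.2f}")
```

Scaling the projection by 1/sqrt(k) makes the compressed inner product an unbiased estimate of the exact one, which is why inner-product-based similarity metrics can survive the projection while downstream processing operates on k rather than d dimensions.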
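
The GP-based feature approximation can be sketched in a similarly hedged way. The toy program below is an illustration rather than the thesis's implementation: it evolves small expression trees toward an exact feature (here, the energy of a 4-sample window) using only cheap add/subtract/multiply primitives. The population size, mutation rate, and target feature are all assumptions made for the example.

```python
import random

# Primitive operations available to the evolved expression trees.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth, n_vars):
    # Leaf (an input variable) with growing probability as depth runs out.
    if depth <= 0 or random.random() < 0.3:
        return ('x', random.randrange(n_vars))
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1, n_vars), random_tree(depth - 1, n_vars))

def evaluate(tree, x):
    if tree[0] == 'x':
        return x[tree[1]]
    return OPS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))

def mutate(tree, depth, n_vars):
    if random.random() < 0.2:          # replace this subtree entirely
        return random_tree(depth, n_vars)
    if tree[0] == 'x':
        return tree
    return (tree[0],
            mutate(tree[1], depth - 1, n_vars),
            mutate(tree[2], depth - 1, n_vars))

def mse(tree, samples, targets):
    return sum((evaluate(tree, x) - t) ** 2
               for x, t in zip(samples, targets)) / len(samples)

random.seed(0)
# Exact feature: energy of a 4-sample window; GP searches for a cheap surrogate.
samples = [[random.gauss(0, 1) for _ in range(4)] for _ in range(64)]
targets = [sum(v * v for v in x) for x in samples]

pop = [random_tree(3, 4) for _ in range(50)]
for _ in range(30):
    pop.sort(key=lambda t: mse(t, samples, targets))
    survivors = pop[:10]                               # elitist selection
    pop = survivors + [mutate(random.choice(survivors), 3, 4)
                       for _ in range(40)]             # refill by mutation

best = min(pop, key=lambda t: mse(t, samples, targets))
print('best approximate-feature MSE:', mse(best, samples, targets))
```

The residual error of such an approximate feature is exactly the kind of feature-data noise that the directed retraining of the inference stage, mentioned in the abstract, is meant to tolerate.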
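
Finally, the multi-task idea can be sketched as a shared encoder feeding several lightweight heads. The PyTorch model below is hypothetical (the layer sizes, input shape, and task heads are invented for illustration): a convolutional autoencoder is first trained for reconstruction, after which its frozen encoder is computed once per input and reused by every task head, amortizing the dominant computation across tasks.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the thesis's architecture): a convolutional
# autoencoder whose encoder, once trained for reconstruction, is shared
# across several task-specific heads.

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = ConvAutoencoder()
# ... train `ae` on unlabeled data with a reconstruction loss ...

# Transfer: freeze the encoder and attach one lightweight head per task.
for p in ae.encoder.parameters():
    p.requires_grad = False

heads = nn.ModuleDict({
    "task_a": nn.Sequential(nn.Flatten(), nn.Linear(16 * 7 * 7, 10)),
    "task_b": nn.Sequential(nn.Flatten(), nn.Linear(16 * 7 * 7, 5)),
})

x = torch.randn(4, 1, 28, 28)        # e.g. a batch of 28x28 sensor frames
z = ae.encoder(x)                    # shared features, computed once
logits = {name: head(z) for name, head in heads.items()}
print({name: out.shape for name, out in logits.items()})
```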