Today’s computation workloads, and the unprecedented capabilities they are enabling, are driving systems to the limits of what conventional technologies can achieve. To break through these limits, we must begin to consider new technologies that align with the data-centric nature of emerging workloads. But how will we know whether these new technologies can meet the complex multitude of needs in full systems (programmability, scalability, reliability) while preserving efficiency and performance?
In this talk, I will present my research on designing complete computing systems based on unconventional computing fabrics that break through the efficiency and performance limits of conventional technologies while preserving the practical requirements of programmability and scalability. This work is characterized by cross-layer co-design and experimental validation via custom CMOS integrated circuits. I will first introduce how approximate computing, an opportunity that arises in many data-centric applications, can address not only efficiency and performance but also open up a new design space for enhancing the tradeoff between programmability and specialization. Next, I will discuss how aggressive mixed-signal computation for in-memory computing (IMC), which has the potential to overcome the well-known memory wall in traditional computing architectures, can be integrated at the architectural and software levels. This work constructs a robust abstraction of mixed-signal IMC operation in order to demonstrate complete programmable architectures. Going beyond architectures, I will present our work on application-mapping co-design, focusing on scalable execution through algorithms derived from the circuit-level energy and density tradeoffs of the computing fabric.
Designing Computing Systems Based on Unconventional Technologies for Hardware Acceleration
Thu, Jun 17, 2021, 3:30 pm