My talk will focus on using generative models to improve generalization in trustworthy machine learning. Development of trustworthy machine learning systems has largely centered on discriminative models, such as classifiers. Unlike generative models, discriminative models do not explicitly model the data distribution, and so they lack a much-needed understanding of it. We harness the best of both worlds by using generative models to embed data-distribution information in discriminative models. In my talk, I will primarily demonstrate the success of this approach in defending neural networks against adversarial examples. In particular, we show that embedding data-distribution information leads to state-of-the-art robust networks. I will then delve deeper into why generative models help, how to predict their success, and which generative models to choose. I will conclude with the broader implications of integrating generative and discriminative models for other aspects of trustworthy machine learning.
Zoom Meeting Link: https://princeton.zoom.us/j/6092166036