Abstract:
The increasing integration of machine learning into everyday systems raises significant privacy concerns. Current privacy auditing techniques for trained models are limited, focusing primarily on record-level privacy. In this talk, I will introduce distribution inference, an inference threat that seeks to uncover private properties of the underlying training distribution. I will describe our efforts to formalize this threat model and to develop corresponding attacks. The talk will also cover broader shortcomings in present privacy auditing methods and propose research directions that attempt to overcome these issues.
Bio:
Anshuman Suri is a final-year PhD candidate advised by David Evans at the University of Virginia. His research spans privacy and security in machine learning, with a particular focus on distribution inference. Anshuman has previously worked at Microsoft as an Applied Scientist and at Oracle Research as an intern. He is the recipient of the UVA John A. Stankovic Graduate Research Award and the UVA Endowed Graduate Fellowship. He has served on the program committees of machine learning conferences such as ICLR, ICML, NeurIPS, and CVPR, and was recognized as an Outstanding Reviewer at ICLR and ICCV in 2021. Prior to UVA, he received his B.Tech (with Honors) from IIIT, Delhi in 2018.