Machine learning models are susceptible to a range of attacks that exploit data leakage from trained models, with objectives such as training data reconstruction and membership inference. Differential Privacy (DP) is the gold standard for quantifying privacy risks and providing provable guarantees against such attacks. Differentially Private Stochastic Gradient Descent (DP-SGD) is the standard privacy-preserving algorithm for training neural networks on private data. However, DP-SGD introduces new challenges: a significant drop in utility, additional engineering effort and memory consumption for per-sample gradient computation, and the requirement of access to model weights.
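As background for these challenges, the sketch below shows the shape of a single DP-SGD update (per-sample gradient clipping followed by Gaussian noise); the function and parameter names are illustrative, not drawn from the talk.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, params, clip_norm, noise_multiplier, lr):
    """Illustrative DP-SGD update: clip each per-sample gradient,
    sum, add Gaussian noise scaled to the clipping norm, then average."""
    clipped = []
    for g in per_sample_grads:  # one gradient per training example
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    noisy_sum = (np.sum(clipped, axis=0)
                 + np.random.normal(0.0, noise_multiplier * clip_norm,
                                    size=params.shape))
    return params - lr * noisy_sum / len(clipped)
```

The need to materialize a gradient per example (rather than per batch) is the source of the extra memory and engineering cost mentioned above.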
I will first discuss how to improve the privacy-utility trade-off by leveraging publicly available information. Specifically, I will discuss how to leverage such priors without access to real-world images, and how to maximize the benefits of these priors. Second, I will introduce a new methodology for DP fine-tuning of large pretrained models: differentially private zeroth-order optimization (DP-ZO). DP-ZO only requires privatizing the scalar information derived from the data, and offers flexibility in the choice of DP mechanism, ease of implementation, and reduced computation. Finally, I will discuss how to protect the privacy of data used for in-context learning (ICL) when we only have access to APIs, such as those of GPT models, rather than full model weights. I will introduce a novel algorithm that generates synthetic few-shot demonstrations from the private dataset with DP guarantees, and show empirically that it achieves effective ICL.
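To make the DP-ZO idea concrete, here is a minimal sketch of one zeroth-order step, assuming an SPSA-style two-point loss-difference estimate and a Gaussian mechanism; only the per-example scalar is clipped and noised, and all names here are my assumptions rather than the talk's implementation.

```python
import numpy as np

def dp_zo_step(loss_fn, params, batch, eps_scale, clip, noise_multiplier, lr, rng):
    """Illustrative DP-ZO step: a two-point zeroth-order estimate yields
    one scalar per example; only those scalars are clipped and noised."""
    z = rng.standard_normal(params.shape)           # shared random direction
    scalars = []
    for x in batch:
        # per-example finite-difference estimate of the directional derivative
        g = (loss_fn(params + eps_scale * z, x)
             - loss_fn(params - eps_scale * z, x)) / (2 * eps_scale)
        scalars.append(np.clip(g, -clip, clip))     # clip the scalar
    noisy = sum(scalars) + rng.normal(0.0, noise_multiplier * clip)
    return params - lr * (noisy / len(batch)) * z   # step along the direction
```

Because the random direction z is independent of the private data, privatizing the clipped scalars suffices, which is what makes the approach flexible in its DP mechanism and light on memory.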
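For the ICL direction, the abstract does not specify the algorithm; one plausible pattern, offered purely as an assumption, is to generate each synthetic token by noisily aggregating next-token distributions obtained from disjoint subsets of the private data through the API. `query_next_token_probs` below is a hypothetical stand-in for such an API call.

```python
import numpy as np

def dp_synthetic_demo(private_subsets, query_next_token_probs, vocab_size,
                      noise_scale, max_len, rng):
    """Hypothetical token-by-token DP generation of one synthetic demonstration.
    Each disjoint private subset contributes one next-token distribution
    (bounding per-example sensitivity); Gaussian noise on the aggregate
    gives a DP guarantee under standard composition."""
    tokens = []
    for _ in range(max_len):
        agg = np.zeros(vocab_size)
        for subset in private_subsets:
            # hypothetical API call: prompt built from one private subset
            agg += query_next_token_probs(subset, tokens)  # probs sum to 1
        agg += rng.normal(0.0, noise_scale, size=vocab_size)
        tokens.append(int(np.argmax(agg)))  # report-noisy-max token selection
    return tokens
```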
Adviser: Prateek Mittal
Zoom: https://princeton.zoom.us/j/9304660881