Understanding and Measuring Privacy Risks in Machine Learning

Thu, Sep 16, 2021, 2:00 pm
Zoom link: https://princeton.zoom.us/j/6441846264

Machine learning models have achieved great success and have been deployed prominently in many real-world applications. However, the sensitive nature of individual users’ data has also raised privacy concerns about machine learning. A recent line of research has shown that a malicious adversary can infer private information about users’ data by querying the target machine learning models. In this thesis, we aim to thoroughly understand and measure privacy risks in machine learning, with a focus on membership inference attacks, where the adversary guesses whether or not an input sample was used to train the target model. We first provide a systematic evaluation of membership inference privacy risks by designing benchmark attack algorithms to measure aggregate privacy risks and by proposing a fine-grained analysis to estimate each individual sample’s privacy risk. Next, we analyze privacy risks in the context of trustworthy machine learning, where robust training algorithms are used to enhance model robustness against input perturbations. We demonstrate that robust training algorithms make machine learning models more vulnerable to membership inference attacks, highlighting the importance of considering privacy and robustness jointly. Finally, we extend record-level membership inference to a user-level privacy analysis and focus on the framework of machine unlearning, where the machine learning service provider is required to remove a user’s data from trained models upon the user’s deletion request. By manipulating some of the training samples to inject backdoors, we propose a high-confidence verification mechanism that enables a user to verify whether the service provider has complied with the deletion request.
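
To make the benchmark idea concrete, below is a minimal sketch (in Python/NumPy) of a confidence-thresholding membership inference attack: the adversary predicts "member" when the target model assigns high confidence to a sample's true label, exploiting the tendency of models to be more confident on their own training data. The function names, the fixed threshold, and the balanced-accuracy metric are illustrative assumptions, not the exact attack algorithms evaluated in the thesis.

import numpy as np

def confidence_attack(model_probs, true_labels, threshold=0.9):
    """Guess 'member' when the target model's confidence in the true label
    exceeds a threshold.

    model_probs : array of shape (n_samples, n_classes), softmax outputs
                  obtained by querying the target model
    true_labels : array of shape (n_samples,), ground-truth class indices
    Returns a boolean array: True = predicted training-set member.
    """
    confidence_on_true_label = model_probs[np.arange(len(true_labels)), true_labels]
    return confidence_on_true_label >= threshold

def attack_accuracy(member_probs, member_labels,
                    nonmember_probs, nonmember_labels, threshold=0.9):
    """Aggregate privacy risk: balanced accuracy of the attack over samples
    known to be members and non-members of the training set."""
    tpr = confidence_attack(member_probs, member_labels, threshold).mean()
    tnr = 1.0 - confidence_attack(nonmember_probs, nonmember_labels, threshold).mean()
    return 0.5 * (tpr + tnr)

An attack accuracy well above 0.5 on such a balanced evaluation indicates that the model leaks membership information; the fine-grained, per-sample analysis mentioned above instead asks how reliably this guess succeeds for each individual training sample.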