Applications of machine learning have become increasingly common in recent years. For instance, navigation systems like Google Maps use machine learning to better predict traffic patterns, while Facebook, LinkedIn, and other social media platforms use machine learning to customize users' news feeds. Central to all these systems is user data. However, the sensitive nature of the collected data has also led to a number of privacy concerns. Privacy-preserving machine learning enables systems that can perform such computation over sensitive data while protecting its privacy.
In this dissertation, we focus on developing efficient protocols for machine learning as a target analytics application. To incorporate privacy, we use an approach based on secure multi-party computation, in which a number of non-colluding entities jointly perform computation over the data; privacy stems from the fact that no single party learns anything about the data being computed on. At the heart of this dissertation are three frameworks -- SecureNN, Falcon, and Ponytail -- each pushing the frontiers of privacy-preserving machine learning and proposing novel approaches to protocol design. Each framework provides significant asymptotic as well as concrete efficiency gains over prior work, improving computation and communication performance by orders of magnitude.
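To illustrate why no single party learns anything in such protocols, the following is a minimal sketch of 2-out-of-2 additive secret sharing over a 64-bit ring, the kind of sharing that MPC frameworks of this style build on. The function names and the two-party setting are illustrative, not the exact protocols of the frameworks above.

```python
import secrets

MOD = 2 ** 64  # arithmetic sharing over the ring Z_{2^64}

def share(x: int) -> tuple[int, int]:
    """Split x into two shares; each share on its own is uniformly random."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0: int, s1: int) -> int:
    """Only both shares together reveal the secret."""
    return (s0 + s1) % MOD

# Shares are additively homomorphic: each party adds its shares locally,
# so linear operations require no communication between the parties.
x0, x1 = share(25)
y0, y1 = share(17)
assert reconstruct((x0 + y0) % MOD, (x1 + y1) % MOD) == 25 + 17
```

Because each individual share is a uniformly random ring element, a party holding only one share has no information about the underlying value; this is the source of privacy in the frameworks discussed here.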
The building blocks -- matrix multiplication, rectified linear unit, maxpool, and batch normalization -- are central to machine learning, and our new protocols for them substantially advance the prior art in private machine learning. Furthermore, each of these systems is implemented and benchmarked to reduce the barrier to deployment. Uniquely positioned at the intersection of theory and practice, these frameworks bridge the gap between plaintext and privacy-preserving computation while contributing new research directions to the community.
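Among these building blocks, multiplication is the one that requires interaction between parties under additive sharing. A classical way to realize it is Beaver's triple technique; the scalar sketch below (assuming a trusted dealer who distributes the triple, and reusing helpers for 2-out-of-2 additive sharing) is illustrative of the general idea rather than the specific multiplication protocols of the frameworks above, which use their own optimized constructions.

```python
import secrets

MOD = 2 ** 64

def share(x: int) -> tuple[int, int]:
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % MOD

# Dealer-generated Beaver triple: random a, b with c = a*b, handed out as shares.
# (A real protocol consumes a fresh triple for every multiplication.)
a = secrets.randbelow(MOD)
b = secrets.randbelow(MOD)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % MOD)

def beaver_mul(x_sh, y_sh):
    """Multiply two shared values using one precomputed triple."""
    # The parties open the masked differences e = x - a and f = y - b;
    # these reveal nothing about x and y because a and b are random.
    e = reconstruct((x_sh[0] - a_sh[0]) % MOD, (x_sh[1] - a_sh[1]) % MOD)
    f = reconstruct((y_sh[0] - b_sh[0]) % MOD, (y_sh[1] - b_sh[1]) % MOD)
    # x*y = (e + a)(f + b) = e*f + e*b + f*a + c, computed share-wise;
    # the public term e*f is added by party 0 only.
    z0 = (c_sh[0] + e * b_sh[0] + f * a_sh[0] + e * f) % MOD
    z1 = (c_sh[1] + e * b_sh[1] + f * a_sh[1]) % MOD
    return z0, z1

z_sh = beaver_mul(share(6), share(7))
assert reconstruct(*z_sh) == 42
```

Matrix multiplication in such frameworks follows the same pattern with matrix-valued triples, which is why improvements to this primitive propagate to the cost of entire neural-network layers.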