Abstract
The rapid rise of IoT and Big Data has enabled a wealth of data-driven applications that enhance our quality of life. However, the omnipresent and all-encompassing nature of data collection raises privacy concerns, so there is a strong need for techniques that ensure the data serve only their intended purposes. To this end, this thesis studies new variants of supervised and adversarial learning methods that allow privacy-enhancing processing to be applied to the data before they are sent out. Our goal is to let users take control of the information in the data they share, preventing its use in applications they deem undesirable while retaining maximal utility from desirable data analysis.
We start with subspace projection techniques based on Linear Discriminant Analysis as our basic building blocks. We then introduce kernel-based techniques and deep neural networks as we develop more general optimization objectives and feature maps. Finally, we propose methods that optimize privacy-enhancing feature maps and predictive models simultaneously in an end-to-end fashion. As we advance toward more sophisticated tools, we also present kernel approximation methods and cheaper neural network embeddings to limit the computational burden on users who wish to desensitize their data.
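To make the first building block concrete, the sketch below illustrates one way an LDA-style privacy-enhancing projection can be set up: the learned subspace spreads apart the classes of a utility label while keeping the classes of a sensitive label mixed. The specific objective (a generalized eigenproblem trading utility scatter against privacy scatter), the function names, and the regularizer eps are illustrative assumptions for this sketch, not the exact formulation used in the thesis.

```python
import numpy as np
from scipy.linalg import eigh

def between_class_scatter(X, y):
    """Between-class scatter matrix for labels y (rows of X are samples)."""
    mu = X.mean(axis=0)
    S = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mu)[:, None]
        S += len(Xc) * (d @ d.T)
    return S

def privacy_lda_projection(X, y_util, y_priv, dim, eps=1e-3):
    """Illustrative projection W that separates utility classes while
    suppressing separability of the sensitive classes:
    solve S_util w = lam * (S_priv + eps * I) w and keep the top directions."""
    S_util = between_class_scatter(X, y_util)
    S_priv = between_class_scatter(X, y_priv)
    reg = S_priv + eps * np.eye(X.shape[1])
    # Generalized symmetric eigenproblem; eigh returns eigenvalues in ascending order.
    vals, vecs = eigh(S_util, reg)
    return vecs[:, ::-1][:, :dim]

# Usage: the desensitized representation is Z = X @ W.
# W = privacy_lda_projection(X, y_task, y_sensitive, dim=2)
```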