Machine learning designs methods that transform data into predictions or estimates. The standard paradigm often assumes that these data are objectively generated from underlying distributions, unaffected by any human factors. However, this assumption breaks down when our predictions or estimated parameters will in turn affect the data providers' welfare. In such situations, data providers have an incentive to alter the data for their own benefit. The design of any learning method must therefore account for potential data manipulation driven by providers' incentives.
This talk will introduce a general "incentive-aware" framework for designing prediction methods. I will illustrate this design paradigm with two examples: (1) a recent and timely application, eliciting authors' truthful private information to improve peer review at today's massive-scale machine learning conferences; and (2) a classic problem, PAC-learning classifiers when the providers of data features behave strategically. In both problems, I will show how the presence of incentives can fundamentally change the problem's learning efficiency and how algorithmic design can help overcome some statistical barriers.
Haifeng Xu is an assistant professor in the Department of Computer Science and the Data Science Institute at UChicago. He directs the Sigma Lab (Strategic IntelliGence for Machine Agents), which focuses on designing intelligent AI systems that can effectively learn and act in informationally complex multi-agent settings. His research has been recognized by several awards, including an IJCAI Early Career Spotlight, a Google Faculty Research Award, the ACM SIGecom Dissertation Award, and the IFAAMAS Victor Lesser Distinguished Dissertation Award.
This seminar is supported by funds from the Korhammer Lecture Series.