
Education:
- Ph.D., Computational and Mathematical Engineering, Stanford University, 2015
- B.Sc., Mathematics, Duke University, 2010
Jason Lee received his Ph.D. from Stanford University in 2015, where he was advised by Trevor Hastie and Jonathan Taylor. Before joining Princeton, he was a postdoctoral scholar at UC Berkeley working with Michael I. Jordan. His research interests are in machine learning, optimization, and statistics. Recently, he has worked on the foundations of deep learning, non-convex optimization, and reinforcement learning.
Selected Publications:
- Jason D. Lee, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. Gradient Descent Converges to Minimizers. Conference on Learning Theory (COLT), 2016.
- Rong Ge, Jason D. Lee, and Tengyu Ma. Matrix Completion has No Spurious Local Minimum. Neural Information Processing Systems (NIPS), 2016.
- Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient Descent Finds Global Minima of Deep Neural Networks. International Conference on Machine Learning (ICML), 2019.
- Alekh Agarwal, Sham M. Kakade, Jason D. Lee, and Gaurav Mahajan. On the Theory of Policy Gradient Methods. Journal of Machine Learning Research (short version at COLT), 2020.
- Simon S. Du, Sham M. Kakade, Jason D. Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, and Ruosong Wang. Bilinear Classes: A Structural Framework for Provable Generalization in RL. International Conference on Machine Learning (ICML), 2021.
Honors and Awards:
- NSF CAREER Award
- ONR Young Investigator Award
- Sloan Research Fellowship in Computer Science
- NIPS Best Student Paper Award