New Results on Universal Dynamic Regret Minimization for Learning and Control

Date
Sep 12, 2022, 4:30 pm – 6:00 pm
Location
B205 Engineering Quad
Event Description

CSML & ECE KORHAMMER SEMINAR SERIES

Abstract: 

Universal dynamic regret is a natural metric for the performance of an online learner in nonstationary environments. The optimal dynamic regret for strongly convex and exponentially concave losses, however, had been open for nearly two decades. In this talk, I will cover recent advances on this problem from my group that largely settled it. We will see that the optimal regret is n^{1/3} TotalVariation(u_{1:n})^{2/3} up to log factors, and that it can be achieved by a novel reduction to adaptive regret. Interestingly, the result was not known even in the offline and stochastic settings, except in more specialized problems. I will also cover various extensions and applications of these results, including, for example, non-stochastic LQR control.
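For concreteness, here is a minimal sketch of the quantities referenced in the abstract, using the standard definition of universal dynamic regret against an arbitrary comparator sequence u_1, ..., u_n; the notation is a paraphrase and is not taken verbatim from the talk:

  R_n(u_{1:n}) = \sum_{t=1}^{n} f_t(x_t) - \sum_{t=1}^{n} f_t(u_t),
  \mathrm{TotalVariation}(u_{1:n}) = \sum_{t=2}^{n} \| u_t - u_{t-1} \|,

so the claimed optimal rate for strongly convex and exp-concave losses reads, up to log factors,

  R_n(u_{1:n}) = \tilde{O}\!\left( n^{1/3} \, \mathrm{TotalVariation}(u_{1:n})^{2/3} \right).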

  

Bio:
Yu-Xiang Wang is the Eugene Aas Assistant Professor of Computer Science at UCSB. He runs the Statistical Machine Learning lab and co-founded the UCSB Center for Responsible Machine Learning. Yu-Xiang received his PhD in Statistics and Machine Learning in 2017 from Carnegie Mellon University (CMU). His recent research interests include offline reinforcement learning, adaptive online learning, theory of deep learning, and differential privacy. His work has been supported by an NSF CAREER Award, an Amazon ML Research Award, a Google Research Scholar Award, and an Adobe Data Science Research Award, and has received paper awards from KDD'15, AISTATS'19, and COLT'21.

 

This seminar is supported with funds from the Korhammer Lecture Series

Sponsors
  • Electrical and Computer Engineering
  • Center for Statistics and Machine Learning
Speaker
  • Yu-Xiang Wang, UC Santa Barbara