Rapid advances in deep learning have produced promising techniques for robots to boost their abilities to perceive, reason, and act by leveraging large models and massive datasets. However, the scalability of existing robot learning methods is severely limited by the amount of manual labor and domain expertise that humans can provide. To acquire general-purpose skills for solving a broad range of tasks, intelligent robots need scalable methods to collect and learn from rich data without extensive human supervision.
In this talk, I will present my research on scaling up robot learning through the autonomous generation of environments, goals, and tasks. I will start by describing how to leverage procedural content generation to learn robust skills that can handle the variety and uncertainty of the real world. Then I will present algorithms that train robots to effectively reuse skills learned from prior experience on novel sequential tasks by learning to generate reachable subgoals. Finally, I will demonstrate how to enable robots to discover a repertoire of novel skills by adaptively generating tasks during training. The acquired skills can be used to solve a variety of complex tasks, such as tool use and sequential manipulation, from raw sensory inputs.
Kuan Fang is a postdoctoral researcher in the Department of Electrical Engineering and Computer Sciences at UC Berkeley, working with Sergey Levine. He received his Ph.D. degree in Electrical Engineering from Stanford University, advised by Fei-Fei Li and Silvio Savarese. His research interests lie at the intersection of robotics, computer vision, and machine learning, with a focus on developing data-driven methods to enable intelligent robots to operate in unstructured environments. He is a recipient of the Stanford Graduate Fellowship and the Computing Innovation Fellowship.