Scaling Full-Stack Safety for Learning-Enabled Robot Autonomy

Pre-FPO Presentation
Dec 1, 2023, 3:00 pm – 4:00 pm
Equad E225 & Zoom (See Abstract for Link)



Event Description

The rapid advancement of machine learning and computation tools has brought promises of deploying fully autonomous robots beyond controlled factory floors. Ensuring their safe operation across various environments, particularly in uncertain, unforeseen, and unforgiving scenarios, is paramount. Traditional safety frameworks have focused solely on the planning and control module within the autonomy stack. However, this isolated approach can impair system performance, often imposing unnecessary information bottlenecks and compounded errors. Instead, the next generation of autonomous robots will need to examine safety across the full stack: from perception and localization to learning and adaptation, motion prediction, and planning and control.

This dissertation aims to lay down the foundations for ensuring the safety of learning-enabled autonomous systems in a way that scales to complex and unpredictable deployment conditions without inducing undue conservativeness by seamlessly integrating the full autonomy stack. In this talk, I will first introduce the overarching concept of a safety filter, which dynamically monitors and intervenes in the operation of autonomous systems to prevent catastrophic failures. We develop methods rooted in dynamic game theory and statistical learning theory to provide safety guarantees for complicated and uncertain environments under various degrees of prior knowledge. We also explore smooth runtime safety filters for unforeseen failure conditions. In the second part of the talk, we switch gears to integrate safety reasoning into the entire span of decision-making. Specifically, we formulate an interpretability framework for human motion prediction through counterfactual responsibility, by which the downstream planner can easily reason about other agents. We then test and verify the safety of the whole learning-enabled autonomy stack through adversarial but realistic scenario generation, which addresses the scarcity of safety-critical events in off-the-shelf datasets. The talk concludes by discussing future challenges and potential directions ahead, emphasizing tractable safety.

Adviser: Jaime Fernández Fisac

Zoom Link: