Details
From autonomous vehicles navigating busy intersections to quadrupeds deployed in household environments, robots must operate safely and efficiently around people in uncertain and unstructured situations. However, today’s robots still struggle to robustly handle low-probability events without becoming overly conservative. In this talk, I will discuss how planning in the joint space of physical and information states (e.g., beliefs) allows robots to make safe, adaptive decisions in human-centered scenarios. I will begin by introducing a unified safety filter framework that combines robust safety analysis with probabilistic reasoning to enable trustworthy human–robot interaction. I will then describe how robots can reduce conservativeness without compromising safety by closing their interaction–learning loop. Next, I will show how game-theoretic reinforcement learning tractably synthesizes a safety filter for high-dimensional systems, guarantees training convergence, and reduces the policy’s exploitability. Finally, I will present a scalable game-theoretic framework for optimizing social welfare and rapidly resolving decision ambiguity in multi-agent scenarios. I will conclude with a vision for next-generation human-centered robotic systems that actively align with their human peers and enjoy verifiable safety assurances.
Adviser: Jaime Fernández Fisac
Zoom Meeting: https://princeton.zoom.us/j/9800645471