Robots and other dynamical systems are becoming increasingly intelligent, with ever-expanding capabilities. This expansion necessitates fail-safe methods that prevent hazards. Optimization-based control already offers tools for issuing safety certificates when the system model is known, and even in the presence of model inaccuracies, these frameworks can be extended with robustness to retain strong safety certificates. Specifically, tools such as Hamilton-Jacobi (HJ) reachability analysis and control barrier functions (CBFs) can generate safety certificates. While both have their benefits, the HJ reachability formulation is optimal (it recovers the maximal safe set) and requires less manual design. Nevertheless, these approaches have the following drawbacks.
Firstly, most of these techniques require both the model and complete knowledge of the environment in order to precompute the safety recipe before performing any task: the Hamilton-Jacobi value function and the optimal safe controls are computed offline, and the optimal safe control is then used in a 'least-restrictive' framework that switches to it only when the system nears the boundary of the safe set. This discontinuous fallback mechanism often causes jerky behavior, which results in wear and tear. Secondly, these safety frameworks may be overly conservative when used in a robust, game-theoretic formulation.
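For concreteness, the least-restrictive scheme is commonly written as a switch on the precomputed HJ value function (the notation below is a standard illustration, not taken from this work):

\[
u(x) =
\begin{cases}
u_{\text{task}}(x), & V(x) > \varepsilon, \\
u^{*}_{\text{safe}}(x), & V(x) \le \varepsilon,
\end{cases}
\]

where V is positive inside the computed safe set, \varepsilon \ge 0 is a small margin, and u^{*}_{\text{safe}} is the offline optimal safe control. The discontinuity at V(x) = \varepsilon is what produces the jerky switching noted above.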
To tackle the first set of issues, we propose a run-time safety assurance mechanism that can adapt to dynamic changes in the environment. We use receding-horizon differential dynamic programming (DDP) to compute the fallback safety recipe online, and safety filtering is then performed with a CBF-style constraint that enables a smooth fallback to safety. These techniques are integrated into a novel framework that provides a provably safe, online, smooth safety fallback.
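To give a flavor of the filtering step, the sketch below assumes control-affine dynamics x_dot = f(x) + g(x)u, a single barrier-like function h(x) whose value and gradient are supplied by the receding-horizon DDP fallback, and no input limits; under these assumptions the CBF-style quadratic program has a closed-form solution. All function names are illustrative placeholders, not the actual implementation.

```python
import numpy as np

def safety_filter(x, u_des, f, g, h, grad_h, alpha=1.0):
    """Minimally modify u_des so that dh/dt(x, u) + alpha * h(x) >= 0."""
    dh = grad_h(x)                    # gradient dh/dx, shape (n,)
    a = dh @ g(x)                     # L_g h(x), shape (m,)
    b = dh @ f(x) + alpha * h(x)      # L_f h(x) + alpha * h(x), scalar
    slack = a @ u_des + b             # constraint value at the desired input
    if slack >= 0.0:
        return u_des                  # desired input already satisfies the constraint
    aa = float(a @ a)
    if aa < 1e-12:
        return u_des                  # constraint is independent of u here; nothing to filter
    # Closed-form CBF-QP solution with one affine constraint and no input limits:
    # project u_des onto the half-space {u : a @ u + b >= 0}.
    return u_des - (slack / aa) * a

# Toy usage: a single integrator that must keep x[0] <= 1.
f = lambda x: np.zeros(2)
g = lambda x: np.eye(2)
h = lambda x: 1.0 - x[0]
grad_h = lambda x: np.array([-1.0, 0.0])
u = safety_filter(np.array([0.9, 0.0]), np.array([2.0, 0.0]), f, g, h, grad_h)
# u == [0.1, 0.0]: the desired input is scaled back just enough to stay safe.
```

The projection is what makes the fallback smooth: the filtered input deviates from the task input continuously, rather than switching abruptly as in the least-restrictive scheme.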
To tackle the second issue, we propose to use Gaussian processes (GPs) to model the residual dynamics in a low-dimensional space. A GP is a highly flexible, non-parametric learning mechanism that quantifies the uncertainty of its own predictions. The GP covariance kernel is designed to reflect the structure of the model errors. Combining these uncertainty estimates with a novel convex relaxation yields a framework for online safety certification that can adjust the conservativeness of the safety recipe.
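As a minimal sketch of the residual-model idea (assuming scikit-learn with a generic RBF kernel in place of the structured covariance described above; `f_nominal` and the feature map Z are hypothetical):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_residual_gp(Z, r):
    """Z: (N, d) low-dimensional features of (x, u); r: (N,) observed residuals
    r_i = x_next_measured_i - f_nominal(x_i, u_i), one output dimension."""
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(Z, r)
    return gp

def robust_margin(gp, z, beta=2.0):
    """Predict the residual at feature z and return a beta-sigma margin."""
    mean, std = gp.predict(z.reshape(1, -1), return_std=True)
    return float(mean[0]), beta * float(std[0])

# Usage idea: tighten the safety constraint from h(x) >= 0 to h(x) >= margin,
# so the certificate is conservative only where the residual model is
# uncertain and relaxes where the GP is confident.
```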
These mechanisms can potentially be integrated to provide robust online safety certificates in the presence of system modeling errors while guaranteeing an adaptive level of performance.
Zoom: https://princeton.zoom.us/j/92687547943
Adviser: Peter Ramadge