Research

In control theory, complicated dynamics such as systems of (nonlinear) differential equations are typically controlled to achieve stability and to optimize a cost. In the area of formal methods in computer science, simple systems such as finite-state transition graphs modeling computer programs and digital circuits are verified and controlled against specifications such as safety, liveness, or richer requirements expressed as formulas of temporal logics. Our group bridges the gaps among control theory, formal methods, and machine learning to develop verification and control synthesis tools for safety-critical cyber-physical and data-driven systems. We use our algorithms to solve challenging problems in robotics and biology.


Research Themes

Safety-Critical Control

Optimal control of autonomous systems is a fundamental and challenging problem, especially when many stringent safety constraints and tight control limitations are involved. It has been shown that optimizing quadratic costs while stabilizing an affine control system to desired states (or sets of states) subject to state and control constraints can be reduced to a sequence of Quadratic Programs (QPs) by using Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs). Although computationally efficient, this method is limited by several factors, which we have been addressing during the past four years. We have extended CBFs to High Order CBFs (HOCBFs), which can accommodate systems and constraints with arbitrary relative degree. We have also proposed analytical methods to derive sufficient conditions that guarantee the feasibility of the QPs. For complex safety constraints, we developed machine learning techniques to learn HOCBF feasibility constraints. When time-varying control bounds and noisy dynamics are involved, we proposed adaptive CBFs. For uncertain systems, we have developed data-driven approaches that ensure safety by extending techniques from adaptive control.

Wei Xiao, Christos G. Cassandras, and Calin Belta, Safe Autonomy with Control Barrier Functions: Theory and Applications, Springer, 2023 (link)

Max H. Cohen and Calin Belta, Adaptive and Learning-based Control of Safety-Critical Systems, Springer, 2023 (link)
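To make the CBF-QP idea above concrete, here is a minimal sketch, not taken from the publications above: a single-integrator system with a circular obstacle, where the QP "min ||u - u_des||^2 subject to grad h(x) . u + alpha h(x) >= 0" has a single linear constraint and therefore a closed-form projection solution. The dynamics, the obstacle geometry, and the gain `alpha` are illustrative assumptions, not the (HO)CBF formulations from the books.

```python
import numpy as np

def cbf_qp_filter(x, u_des, x_obs, r, alpha=1.0):
    """Minimally modify u_des so the CBF condition holds.

    Sketch for single-integrator dynamics x_dot = u with safe set
    h(x) = ||x - x_obs||^2 - r^2 >= 0.  The CBF-QP
        min ||u - u_des||^2   s.t.   grad_h(x) . u + alpha * h(x) >= 0
    has one linear constraint, so its minimizer is a closed-form
    projection onto a half-space (no QP solver needed here).
    """
    h = np.dot(x - x_obs, x - x_obs) - r ** 2
    a = 2.0 * (x - x_obs)             # gradient of h at x
    slack = a @ u_des + alpha * h     # constraint value at the nominal input
    if slack >= 0.0:
        return u_des                  # nominal input already satisfies the CBF condition
    # Project u_des onto the half-space {u : a . u + alpha * h >= 0}.
    return u_des - (slack / (a @ a)) * a

# A nominal controller driving straight toward the obstacle gets
# minimally corrected so the barrier constraint holds with equality.
x = np.array([1.0, 0.0])
u_safe = cbf_qp_filter(x, np.array([-1.0, 0.0]), np.zeros(2), r=0.9)
```

In the general affine-control setting the constraint stays linear in u, so the same structure carries over with an off-the-shelf QP solver in place of the projection.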

Safe, Interpretable, and Composable Reinforcement Learning

Growing interest in reinforcement learning approaches to robotic planning and control raises concerns about the predictability and safety of robot behaviors realized solely through learned control policies. In addition, formally defining reward functions for complex tasks is challenging, and faulty rewards are prone to exploitation by the learning agent. We develop formal methods approaches to reinforcement learning that (1) provide formal specification languages integrating high-level, rich task specifications with a priori, domain-specific knowledge; (2) make the reward generation process easily interpretable; (3) guide the policy generation process according to the specification; and (4) guarantee the satisfaction of the (critical) safety component of the specification.

Wenliang Liu, Mirai Duintjer Tebbens Nishioka, and Calin Belta, Safe Model-based Control from Signal Temporal Logic Specifications Using Recurrent Neural Networks, IEEE International Conference on Robotics and Automation (ICRA), 2023 (pdf)

Xiao Li, Zachary Serlin, Guang Yang, and Calin Belta, A Formal Methods Approach to Interpretable Reinforcement Learning for Robotic Planning, Science Robotics, vol. 4, issue 37, 2019 (pdf)
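One way the specification-to-reward step above can be made interpretable is through the quantitative (robustness) semantics of a temporal logic such as Signal Temporal Logic: the robustness of a formula on a trajectory is a real number that is positive exactly when the formula is satisfied, so it can serve directly as a reward. The two-operator fragment and the function names below are illustrative assumptions, not the specification languages from the papers above.

```python
import numpy as np

def robustness_always_gt(signal, c):
    """Robustness of G (s > c) on a finite trace:
    the worst-case margin min_t (s[t] - c)."""
    return float(np.min(np.asarray(signal, dtype=float) - c))

def robustness_eventually_gt(signal, c):
    """Robustness of F (s > c) on a finite trace:
    the best-case margin max_t (s[t] - c)."""
    return float(np.max(np.asarray(signal, dtype=float) - c))

# Positive robustness means the formula holds, and its magnitude says
# how robustly, so a learning agent rewarded with this value is pushed
# toward satisfying the specification rather than exploiting an
# ad hoc hand-written reward.
reward = robustness_always_gt([1.0, 2.0, 3.0], c=0.5)
```

Richer formulas (nested temporal operators, time windows, conjunctions) compose the same min/max machinery recursively over subformulas.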