
Robustness, Adaptation, and Learning in Optimal Control

I. Papusha

Abstract

Recent technological advances have opened the door to a wide variety of dynamic control applications, enabled by increasing computational power in ever smaller devices. These advances are backed by reliable optimization algorithms that allow specification, synthesis, and embedded implementation of sophisticated learning-based controllers. However, as control systems become more pervasive, dynamic, and complex, the algorithms governing them become harder to design and analyze. In many cases, optimal control policies are practically impossible to determine unless the state dimension is small or the dynamics are simple. Thus, to make implementation progress, the control designer must turn to suboptimal architectures and approximate control. The major engineering challenge in the upcoming decades will be coping with the complexity of designing implementable control architectures for these smart systems while certifying their safety, robustness, and performance.

This thesis tackles the design and verification complexity by carefully employing tractable lower and upper bounds on the Lyapunov function, while making connections to robust control, formal synthesis, and machine learning. Specifically, optimization-based upper bounds are used to specify robust controllers, while lower bounds are used to obtain performance bounds and to synthesize approximately optimal policies. Implementation of these bounds depends critically on carrying out learning and optimization in the loop. Examples in aerospace, formal methods, hybrid systems, and networked adaptive systems are given, and novel sources of identifiability and persistence of excitation are discussed.
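To illustrate the flavor of the value-function bounds mentioned above (a minimal hypothetical sketch, not taken from the thesis): for a stable scalar linear system with quadratic stage cost, the quadratic function V(x) = p x² whose coefficient solves the discrete Lyapunov equation p = a²p + 1 equals the infinite-horizon cost, so it upper-bounds any truncated simulated cost.

```python
# Hypothetical scalar example of a Lyapunov/value-function bound.
# Dynamics: x_{t+1} = a * x_t with |a| < 1; stage cost: x_t^2.
# The coefficient p solving the Lyapunov equation p = a^2 * p + 1
# gives V(x) = p * x^2, the exact infinite-horizon cost, which
# therefore upper-bounds every finite-horizon (truncated) cost.

a = 0.5                      # stable dynamics, |a| < 1 (illustrative value)
p = 1.0 / (1.0 - a * a)      # closed-form solution of p = a^2 * p + 1

def simulated_cost(x0, steps):
    """Truncated cost sum_{t < steps} x_t^2 along x_{t+1} = a * x_t."""
    cost, x = 0.0, x0
    for _ in range(steps):
        cost += x * x
        x *= a
    return cost

x0 = 1.0
# V(x0) = p * x0^2 upper-bounds any truncated simulated cost...
assert p * x0 * x0 >= simulated_cost(x0, 100)
# ...and the truncated cost converges to V(x0) as the horizon grows.
assert abs(p * x0 * x0 - simulated_cost(x0, 500)) < 1e-9
```

In the multivariable case the same role is played by a matrix P satisfying a linear matrix inequality, which is where the optimization-based upper and lower bounds in the abstract come in.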

Citation

I. Papusha. Robustness, Adaptation, and Learning in Optimal Control. Ph.D. thesis, California Institute of Technology, 2016.