1. Data-based \(\mathcal{L}_2\) gain optimal control for discrete-time system with unknown dynamics

Published in Journal of the Franklin Institute, 360(6): 4354-4377, 2023

Jiamin Wang, Jian Liu, Yuanshi Zheng, and Dong Zhang

This paper considers the \(\mathcal{L}_2\) gain optimal control problem for a class of discrete-time linear time-invariant systems with a state-disturbance feedback controller and unknown system dynamics. Firstly, for a given stabilizing control policy, we establish the relationship between the \(\mathcal{L}_2\) gain and a sequence of lower triangular Toeplitz matrices. Meanwhile, we show that the upper bound of the optimal \(\mathcal{L}_2\) gain is proportional to the degree of linear correlation between the input and disturbance matrices. Secondly, to overcome the obstacle arising from the unknown system dynamics, a data-based reinforcement learning scheme is developed to obtain the optimal control policy, using the linear matrix inequality technique and Q-learning with policy iteration. Under certain conditions, we prove that either the reinforcement learning process terminates in a finite number of iterations, or the \(\mathcal{L}_2\) gain sequence is strictly monotonically convergent along the iteration axis, provided that the disturbance data set can fully activate the closed-loop system. Finally, simulations are given to illustrate the effectiveness of our findings.
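To make the problem setting concrete, the following is a minimal sketch of the standard \(\mathcal{L}_2\) gain setup the abstract refers to; the symbols \(A, B, D, C, E\) and the performance output \(z_k\) are assumed notation for illustration, not taken from the paper:

\[
\begin{aligned}
x_{k+1} &= A x_k + B u_k + D w_k, \qquad z_k = C x_k + E u_k, \qquad x_0 = 0,\\
u_k &= K x_k + L w_k \quad \text{(state-disturbance feedback)},\\
\gamma(K, L) &= \sup_{0 \neq w \in \ell_2} \frac{\lVert z \rVert_{\ell_2}}{\lVert w \rVert_{\ell_2}}.
\end{aligned}
\]

In this sketch, the design goal is to minimize \(\gamma(K, L)\) over stabilizing pairs \((K, L)\) using measured state, input, and disturbance data only, since \((A, B, D)\) are unknown.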

Download Paper