A reinforcement learning method with closed-loop stability guarantee for systems with unknown parameters

Thomas Göhrt, Fritjof Griesing-Scheiwe, Pavel Osinenko, Stefan Streif

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

This work is concerned with the application of reinforcement learning (RL) techniques to adaptive dynamic programming (ADP) for systems with partly unknown models. In ADP, one seeks to approximate the optimal infinite-horizon cost function, the value function. Such an approximation, i.e., a critic, does not in general yield stabilizing control policies, i.e., stabilizing actors. Guaranteeing stability of nonlinear systems under RL/ADP is still an open issue. In this work, it is suggested to use a stability constraint directly in the actor-critic structure. The system model considered in this work is assumed to be only partially known; specifically, it contains an unknown parameter vector. A suitable stabilizability assumption for such systems is the existence of an adaptive Lyapunov function, as is commonly assumed in adaptive control. The present approach formulates a stability constraint based on an adaptive Lyapunov function to ensure closed-loop stability. Convergence of the actor and critic parameters in a suitable sense is shown. A case study demonstrates how the suggested algorithm preserves closed-loop stability while at the same time improving the infinite-horizon performance.
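The abstract only summarizes the approach; the following minimal Python sketch (not the authors' implementation) illustrates the kind of constrained actor-critic step it describes: a candidate actor update is accepted only if a Lyapunov-decrease condition, evaluated with the current estimate of the unknown parameter, is satisfied. The scalar dynamics x_dot = theta*x + u, the simple gradient estimator, the quadratic critic, the gains, and the simplified state-only Lyapunov candidate V(x) = 0.5*x^2 (the paper uses an adaptive Lyapunov function) are all illustrative assumptions.

def V(x):
    """Simplified Lyapunov function candidate (illustrative; the paper's is adaptive)."""
    return 0.5 * x ** 2

def plant(x, u, theta_true, dt):
    """True dynamics (theta_true is unknown to the controller), Euler-integrated."""
    return x + dt * (theta_true * x + u)

def rl_step(x, w, k, theta_hat, theta_true, dt=0.01, lr=0.05, eps=1e-3):
    u = -k * x                                   # actor: linear state feedback
    x_next = plant(x, u, theta_true, dt)         # apply the actor to the true plant

    # Parameter estimator: simple gradient law on the prediction error (assumed).
    x_dot_meas = (x_next - x) / dt
    theta_hat += lr * (x_dot_meas - theta_hat * x - u) * x

    # Critic: quadratic approximation V_c(x) = w*x^2, semi-gradient TD update
    # for the running cost x^2 + u^2.
    td = (x ** 2 + u ** 2) * dt + w * x_next ** 2 - w * x ** 2
    w += lr * td * x ** 2

    # Actor candidate: for this quadratic structure the critic-implied gain is k* = w.
    k_cand = k + lr * (w - k)

    # Stability constraint: predict the closed loop with theta_hat and keep the
    # candidate only if the Lyapunov function decreases by a margin.
    x_pred = x + dt * (theta_hat * x - k_cand * x)
    if V(x_pred) <= V(x) - eps * dt * x ** 2:
        k = k_cand                               # constraint satisfied, accept update
    # otherwise retain the previous (stabilizing) actor parameters

    return x_next, w, k, theta_hat

# Usage on a toy unstable plant (theta_true > 0) with a stabilizing initial actor.
x, w, k, theta_hat = 1.0, 1.0, 2.0, 0.0
for _ in range(5000):
    x, w, k, theta_hat = rl_step(x, w, k, theta_hat, theta_true=0.5)
print(f"state {x:.4f}, critic weight {w:.3f}, actor gain {k:.3f}, theta_hat {theta_hat:.3f}")

In this sketch the constraint acts as a safeguard: learning can only move the actor within the set of parameters for which the (estimated) closed loop keeps decreasing the Lyapunov candidate, mirroring the role of the stability constraint in the actor-critic structure described above.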

Original language: English
Pages (from-to): 8157-8162
Number of pages: 6
Journal: IFAC-PapersOnLine
Volume: 53
Issue number: 2
DOIs
Publication status: Published - 2020
Externally published: Yes
Event: 21st IFAC World Congress 2020 - Berlin, Germany
Duration: 12 Jul 2020 - 17 Jul 2020

Keywords

  • Consensus and Reinforcement learning control
  • Nonlinear adaptive control

