A reinforcement learning method with closed-loop stability guarantee for systems with unknown parameters

Thomas Göhrt, Fritjof Griesing-Scheiwe, Pavel Osinenko, Stefan Streif

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

This work is concerned with the application of reinforcement learning (RL) techniques to adaptive dynamic programming (ADP) for systems with partly unknown models. In ADP, one seeks to approximate an optimal infinite-horizon cost function, the value function. Such an approximation, i.e., a critic, does not in general yield a stabilizing control policy, i.e., a stabilizing actor. Guaranteeing stability of nonlinear systems under RL/ADP is still an open issue. In this work, it is suggested to use a stability constraint directly in the actor-critic structure. The system model considered in this work is assumed to be only partially known; specifically, it contains an unknown parameter vector. A suitable stabilizability assumption for such systems is an adaptive Lyapunov function, which is commonly assumed in adaptive control. The current approach formulates a stability constraint based on an adaptive Lyapunov function to ensure closed-loop stability. Convergence of the actor and critic parameters in a suitable sense is shown. A case study demonstrates how the suggested algorithm preserves closed-loop stability while at the same time improving infinite-horizon performance.
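To illustrate the idea described in the abstract, the following minimal Python sketch shows one way an actor update could be constrained by a Lyapunov decrease condition: the action minimizes the critic's cost-to-go estimate subject to a decrease of an adaptive Lyapunov candidate along the predicted next state. This is not the authors' implementation; the model f, the features, the Lyapunov candidate, and all parameters are hypothetical assumptions made purely for illustration.

```python
# Illustrative sketch (NOT the paper's algorithm): actor update with a
# stability constraint based on an assumed adaptive Lyapunov function.
import numpy as np
from scipy.optimize import minimize

def f(x, u, theta_hat):
    """Assumed system model x_{k+1} = f(x, u; theta); theta_hat is the
    current estimate of the unknown parameter vector."""
    return np.array([x[1], theta_hat[0] * np.sin(x[0]) + u[0]])

def critic(x, w):
    """Hypothetical quadratic critic V_hat(x) = w^T phi(x)."""
    phi = np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])
    return float(w @ phi)

def lyapunov(x, theta_hat):
    """Assumed adaptive Lyapunov function candidate."""
    return float(x @ x) + 0.1 * float(theta_hat @ theta_hat)

def stability_constrained_actor(x, w, theta_hat, decay=0.05):
    """Choose u minimizing the critic's cost estimate at the predicted
    next state, subject to V(f(x,u)) - V(x) <= -decay * ||x||^2."""
    def objective(u):
        return critic(f(x, u, theta_hat), w)

    def decrease(u):
        # Non-negative exactly when the Lyapunov decrease condition holds.
        return (lyapunov(x, theta_hat)
                - lyapunov(f(x, u, theta_hat), theta_hat)
                - decay * float(x @ x))

    res = minimize(objective, x0=np.zeros(1),
                   constraints=[{"type": "ineq", "fun": decrease}])
    return res.x

if __name__ == "__main__":
    x = np.array([0.5, -0.2])
    w = np.array([1.0, 0.2, 1.0])   # hypothetical critic weights
    theta_hat = np.array([0.8])     # hypothetical parameter estimate
    print("constrained action:", stability_constrained_actor(x, w, theta_hat))
```

In the paper's setting the decrease condition would be formulated with the adaptive Lyapunov function and the parameter estimator, and the critic weights would be updated alongside the actor; the sketch only shows the constrained-actor step under the stated assumptions.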

Original language: English
Pages (from-to): 8157-8162
Number of pages: 6
Journal: IFAC-PapersOnLine
Volume: 53
Issue number: 2
DOI
State: Published - 2020
Externally published: Yes
Event: 21st IFAC World Congress 2020 - Berlin, Germany
Duration: 12 Jul 2020 - 17 Jul 2020
