Effects of Sampling and Prediction Horizon in Reinforcement Learning

Pavel Osinenko, Dmitrii Dobriborsci

Research output: Contribution to journal › Article › peer-review


Plain reinforcement learning (RL) may be prone to loss of convergence, constraint violation, and unexpected performance. Commonly, RL agents undergo extensive learning stages to achieve proper functionality. This is in contrast to classical control algorithms, which are typically model-based. One direction of research is the fusion of RL with such algorithms, especially model-predictive control (MPC). This, however, introduces new hyper-parameters related to the prediction horizon. Furthermore, RL is usually concerned with Markov decision processes, yet most real environments are not time-discrete. The factual physical setting of RL consists of a digital agent and a time-continuous dynamical system. There is thus, in fact, yet another hyper-parameter: the agent sampling time. In this paper, we investigate the effects of prediction horizon and sampling time for two hybrid RL-MPC agents in a case study of mobile robot parking, which is a canonical control problem. We benchmark the agents against a simple variant of MPC. The sampling time showed a 'sweet spot' behavior, whereas the RL agents demonstrated merits at shorter horizons.
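To make the two hyper-parameters concrete, the following is a minimal sketch of a receding-horizon (MPC-style) controller for a sampled continuous system. The dynamics (a double integrator standing in for the parking task), cost weights, and discrete action set are illustrative assumptions, not the paper's actual setup; they only show where the sampling time `dt` and the prediction `horizon` enter the loop.

```python
# Hedged sketch: generic receding-horizon control of a sampled
# continuous system. `dt` (sampling time) and `horizon` (prediction
# horizon) are the hyper-parameters studied in the paper; the
# dynamics, cost, and action set below are illustrative assumptions.
import itertools

def step(state, u, dt):
    """Euler-discretized double integrator: x' = v, v' = u."""
    x, v = state
    return (x + v * dt, v + u * dt)

def rollout_cost(state, actions, dt):
    """Accumulated quadratic stage cost of an action sequence."""
    cost = 0.0
    for u in actions:
        state = step(state, u, dt)
        x, v = state
        cost += (x**2 + v**2 + 0.1 * u**2) * dt
    return cost

def mpc_action(state, dt, horizon, action_set=(-1.0, 0.0, 1.0)):
    """Apply the first action of the best sequence over the horizon
    (exhaustive search; adequate for a tiny discrete action set)."""
    best = min(itertools.product(action_set, repeat=horizon),
               key=lambda seq: rollout_cost(state, seq, dt))
    return best[0]

# Closed-loop run: steer the state toward the origin ("parking").
state, dt, horizon = (2.0, 0.0), 0.1, 4
for _ in range(100):
    state = step(state, mpc_action(state, dt, horizon), dt)
print(state)  # state should settle near (0, 0)
```

Shrinking `dt` refines the model discretization but shortens the lookahead window `dt * horizon`, while growing `horizon` raises the per-step search cost, which is the trade-off behind the 'sweet spot' observed in the paper.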

Original language: English
Pages (from-to): 127611-127618
Number of pages: 8
Journal: IEEE Access
Publication status: Published - 2021


  • mobile robot
  • predictive control
  • reinforcement learning
  • simulation

