## Abstract

The problem of general nonlinear stochastic optimal control with small Wiener noise is studied. The problem is approximated by a Markov decision process, and the Bellman equation is solved using the Value Iteration (VI) algorithm in the low-rank Tensor Train format (TT-VI). We propose a modification of the TT-VI algorithm called TT-Q-Iteration (TT-QI), in which the nonlinear Bellman optimality operator is applied iteratively to the solution as a composition of internal Tensor Train algebraic operations and the TT-CROSS algorithm. We show that TT-QI has lower asymptotic complexity per iteration than the existing method, provided that the TT-ranks of the transition probabilities are small. On test examples of an underpowered inverted pendulum and a Dubins car, our method converges 3–10 times faster in wall-clock time than the original method.
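For context, the Bellman update that the paper accelerates can be sketched on a tiny dense MDP. This is a minimal illustration, not the paper's method: the transition matrix `P`, reward `R`, and discount `gamma` below are hypothetical, and the paper's contribution is evaluating this same optimality operator in the low-rank Tensor Train format (via TT algebra and TT-CROSS) instead of on a dense state grid.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP, for illustration only.
# P[a, s, s'] : transition probability from s to s' under action a
# R[a, s]     : immediate reward for taking action a in state s
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.4, 0.6]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality operator:
    # T(V)(s) = max_a [ R(a, s) + gamma * sum_s' P(a, s, s') V(s') ]
    Q = R + gamma * P @ V        # Q-values, shape (n_actions, n_states)
    V_new = Q.max(axis=0)        # greedy maximization over actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

print(np.round(V, 4))  # approximate fixed point of the Bellman operator
```

Value iteration converges because the Bellman optimality operator is a gamma-contraction; the TT-QI variant in the paper applies the same fixed-point iteration but keeps `V` (and the transition tensors) compressed, which is what makes high-dimensional state spaces tractable.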

| Original language | English |
| --- | --- |
| Pages (from-to) | 836-846 |
| Number of pages | 11 |
| Journal | Computational Mathematics and Mathematical Physics |
| Volume | 61 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - May 2021 |

## Keywords

- dynamic programming
- low rank decomposition
- Markov chain approximation
- Markov decision process
- MCA
- MDP
- optimal control