Model predictive controllers are commonly associated with a fixed running and/or terminal cost function. Recently, several possibilities for cost function adaptation inspired by reinforcement learning have been investigated. The present study analyzes the closed-loop stability of such controllers in a general setting and establishes the constraints that the learned running and terminal costs must satisfy to guarantee it. A particular feature of the suggested control scheme is that, unlike in many common model predictive controllers, the assumed local Lyapunov function need not decay at a rate no less than the running cost. The relation of the considered control scheme to a baseline model predictive controller and to adaptive dynamic programming is discussed. A case study demonstrates how different cost function adaptation schemes lead to different performance with respect to the infinite-horizon cost.
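For reference, the classical terminal-ingredient condition that the scheme above relaxes can be sketched as follows (notation assumed here, not taken from the source: $f$ the system dynamics, $\ell$ the running cost, $\hat{V}$ the local Lyapunov function used as terminal cost, $\kappa$ a local stabilizing controller, and $\mathbb{X}_f$ the terminal set):

```latex
\[
  \hat{V}\big(f(x,\kappa(x))\big) - \hat{V}(x)
  \;\le\; -\,\ell\big(x,\kappa(x)\big),
  \qquad x \in \mathbb{X}_f .
\]
```

That is, the standard requirement forces the terminal cost to decay along the locally controlled dynamics by at least the running cost; the analysis summarized above does not impose this dominance.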