In reservoir model adaptation, direct gradient backpropagation through the forward model is often intractable or computationally prohibitive. One therefore has to construct separate models that approximate the gradients implicitly, e.g. via stochastic sampling or by solving adjoint systems. We demonstrate that if the forward model is a neural network, gradient backpropagation becomes naturally available both in model training and in adaptation. In our research we compare three adaptation strategies: variation of reservoir model variables, neural network adaptation, and latent space adaptation, and discuss to what extent each preserves the geological content. We use a real-world reservoir model to investigate the problem in a practical setting. The numerical experiments demonstrate that latent space adaptation provides the most stable and accurate results.
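
As a minimal illustration of the latent space adaptation strategy mentioned above, the sketch below shows one way such an adaptation loop could look when the forward model is a neural network. The module names `generator` (a pretrained generative model mapping a latent vector to reservoir model variables) and `forward_surrogate` (a neural-network surrogate of the flow simulator), the loss weights, and the optimizer settings are assumptions for illustration, not the paper's implementation; only the latent vector is optimized, while both networks stay frozen.

```python
import torch

# Hypothetical, assumed components (not from the paper):
#   generator:         maps a latent vector z to reservoir model variables
#                      (e.g. porosity / permeability fields)
#   forward_surrogate: neural-network surrogate of the reservoir simulator,
#                      maps model variables to simulated production responses
# Both are pretrained and kept frozen; only the latent vector z is adapted.

def adapt_latent(generator, forward_surrogate, d_obs, latent_dim,
                 steps=500, lr=1e-2, prior_weight=1e-3):
    generator.eval()
    forward_surrogate.eval()
    for p in generator.parameters():
        p.requires_grad_(False)
    for p in forward_surrogate.parameters():
        p.requires_grad_(False)

    # Start the search from the prior mean of the latent space.
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        m = generator(z)                 # latent -> reservoir model variables
        d_sim = forward_surrogate(m)     # model variables -> simulated observations
        mismatch = torch.mean((d_sim - d_obs) ** 2)
        # Penalize departure from the latent prior to keep the adapted
        # model geologically plausible (an assumed regularization choice).
        prior = prior_weight * torch.sum(z ** 2)
        loss = mismatch + prior
        loss.backward()                  # gradients flow through both networks into z
        opt.step()

    return z.detach(), generator(z).detach()
```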