Black-box learning of multigrid parameters

Alexandr Katrutsa, Talgat Daulbaev, Ivan Oseledets

    Research output: Contribution to journal › Article › peer-review

    11 Citations (Scopus)


    This paper studies the optimality of the restriction and prolongation operators in the geometric multigrid method (GMG). GMG is used to solve discretized partial differential equations (PDEs), and its performance depends heavily on the restriction and prolongation operators. Many methods for constructing these operators have been proposed, but most come with only limited optimality proofs. To study their optimality we introduce a stochastic convergence functional, which estimates the spectral radius of the iteration matrix for given GMG parameters. We implement the GMG method in a modern machine learning framework that automatically computes the gradients of the introduced convergence functional with respect to the restriction and prolongation operators. We can therefore minimize the proposed functional starting from some initial parameters and obtain better ones after a number of iterations of stochastic gradient descent. To illustrate the performance of the proposed approach, we carry out experiments on the discretized Poisson equation, Helmholtz equation, and singularly perturbed convection–diffusion equation, and demonstrate that the proposed approach yields operators that lead to faster convergence.
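    The core quantity in the abstract is a stochastic estimate of the spectral radius of the two-grid error-propagation matrix. The following is a minimal NumPy sketch of that idea for the 1D Poisson equation, not the authors' implementation: it builds a standard two-grid cycle (damped Jacobi smoothing, linear prolongation, full-weighting restriction, Galerkin coarse operator) and estimates the spectral radius by averaging the growth of `||E^k x||` over random unit vectors, as a stand-in for the paper's convergence functional. All function names and parameter values here are illustrative assumptions.

    ```python
    import numpy as np

    def poisson_1d(n):
        # 1D Poisson matrix on n interior points (h = 1/(n+1))
        h = 1.0 / (n + 1)
        return (np.diag(2.0 * np.ones(n))
                - np.diag(np.ones(n - 1), 1)
                - np.diag(np.ones(n - 1), -1)) / h**2

    def prolongation(n):
        # linear interpolation from (n-1)//2 coarse points to n fine points
        nc = (n - 1) // 2
        P = np.zeros((n, nc))
        for j in range(nc):
            i = 2 * j + 1          # fine-grid index of coarse point j
            P[i, j] = 1.0
            P[i - 1, j] += 0.5
            P[i + 1, j] += 0.5
        return P

    def two_grid_error_matrix(A, P, R, nu=2, omega=2.0 / 3.0):
        # error-propagation matrix E = S^nu (I - P A_c^{-1} R A) S^nu
        n = A.shape[0]
        D_inv = np.diag(1.0 / np.diag(A))
        S = np.eye(n) - omega * D_inv @ A      # damped Jacobi smoother
        Ac = R @ A @ P                          # Galerkin coarse operator
        CGC = np.eye(n) - P @ np.linalg.solve(Ac, R @ A)
        Snu = np.linalg.matrix_power(S, nu)
        return Snu @ CGC @ Snu

    def stochastic_rho(E, k=10, n_samples=50, seed=0):
        # Monte Carlo proxy for the spectral radius: average growth of
        # ||E^k x|| over random unit vectors x, then take the k-th root
        rng = np.random.default_rng(seed)
        Ek = np.linalg.matrix_power(E, k)
        xs = rng.standard_normal((E.shape[0], n_samples))
        xs /= np.linalg.norm(xs, axis=0)
        return np.mean(np.linalg.norm(Ek @ xs, axis=0)) ** (1.0 / k)

    n = 31
    A = poisson_1d(n)
    P = prolongation(n)
    R = 0.5 * P.T                               # full-weighting restriction
    E = two_grid_error_matrix(A, P, R)
    rho_true = np.max(np.abs(np.linalg.eigvals(E)))
    rho_est = stochastic_rho(E)
    print(f"exact rho = {rho_true:.4f}, stochastic estimate = {rho_est:.4f}")
    ```

    In the paper this kind of estimate is made differentiable: implementing the cycle in an automatic-differentiation framework gives gradients of the estimate with respect to the entries of P and R, so stochastic gradient descent can improve them from this classical initialization.
    
    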

    Original language: English
    Article number: 112524
    Journal: Journal of Computational and Applied Mathematics
    Publication status: Published - Apr 2020


    Keywords:

    • Automatic differentiation
    • Geometric multigrid method
    • Helmholtz equation
    • Poisson equation
    • Spectral radius minimization


