MaxEntropy Pursuit Variational Inference

Evgenii Egorov, Kirill Neklyudov, Ruslan Kostoev, Evgeny Burnaev

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    1 Citation (Scopus)


    One of the core problems in variational inference is the choice of the approximate posterior distribution: one must trade off the efficiency of inference with simple families, such as mean-field models, against its accuracy. We propose a variant of greedy approximation of the posterior distribution with tractable base learners. Using the Max-Entropy approach, we obtain a well-defined optimization problem. We demonstrate the method's ability to capture complex multimodal posteriors in a continual learning setting for neural networks.
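    The greedy construction described in the abstract can be sketched in one dimension as follows. This is a schematic illustration of boosting-style variational inference with Gaussian base learners and an entropy-regularized (ELBO-like) score, not the authors' exact algorithm: the fixed step-size schedule, the grid search over component means, and the concrete objective are all illustrative assumptions.

    ```python
    import numpy as np

    def log_target(x):
        # Unnormalized bimodal target: equal mixture of N(-2, 1) and N(2, 1).
        return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

    def gauss_pdf(x, mu, sigma=1.0):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    def greedy_fit(n_components=2, grid=np.linspace(-8.0, 8.0, 321)):
        """Greedily add Gaussian base learners to a mixture approximation q."""
        dx = grid[1] - grid[0]
        q = np.zeros_like(grid)          # current mixture density on the grid
        mus = []
        for t in range(n_components):
            alpha = 1.0 / (t + 1.0)      # illustrative fixed step-size schedule
            best_mu, best_score = None, -np.inf
            for mu in np.linspace(-5.0, 5.0, 101):
                cand = (1.0 - alpha) * q + alpha * gauss_pdf(grid, mu)
                # ELBO-like score: E_cand[log p] plus the entropy of cand;
                # the entropy term rewards mixtures that spread over modes.
                score = np.sum(cand * (log_target(grid) - np.log(cand + 1e-12))) * dx
                if score > best_score:
                    best_mu, best_score = mu, score
            mus.append(best_mu)
            q = (1.0 - alpha) * q + alpha * gauss_pdf(grid, best_mu)
        return mus, q, grid

    mus, q, grid = greedy_fit()
    print("component means:", sorted(mus))
    ```

    On this toy target the first base learner locks onto one mode (the single-Gaussian objective is mode-seeking), and the entropy term then drives the second component toward the remaining mode, so the mixture covers both, which is the qualitative behavior the abstract claims for multimodal posteriors.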

    Original language: English
    Title of host publication: Advances in Neural Networks – ISNN 2019 - 16th International Symposium on Neural Networks, ISNN 2019, Proceedings
    Editors: Huchuan Lu, Huajin Tang, Zhanshan Wang
    Publisher: Springer Verlag
    Number of pages: 9
    ISBN (Print): 9783030227951
    Publication status: Published - 2019
    Event: 16th International Symposium on Neural Networks, ISNN 2019 - Moscow, Russian Federation
    Duration: 10 Jul 2019 – 12 Jul 2019

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 11554 LNCS
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349


    Conference: 16th International Symposium on Neural Networks, ISNN 2019
    Country/Territory: Russian Federation


    • Bayesian inference
    • Deep learning
    • Maximum entropy
    • Variational inference


