Latent convolutional models

Shah Rukh Athar, Evgeny Burnaev, Victor Lempitsky

    Research output: Contribution to conference › Paper › peer-review

    7 Citations (Scopus)

    Abstract

    We present a new latent model of natural images that can be learned on large-scale datasets. The learning process provides a latent embedding for every image in the training dataset, as well as a deep convolutional network that maps the latent space to the image space. After training, the new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, superresolution, and colorization. To model high-resolution natural images, our approach uses latent spaces of very high dimensionality (one to two orders of magnitude higher than previous latent image models). To tackle this high dimensionality, we use latent spaces with a special manifold structure (convolutional manifolds) parameterized by a ConvNet of a certain architecture. In the experiments, we compare the learned latent models with latent models learned by autoencoders, advanced variants of generative adversarial networks, and a strong baseline system using simpler parameterization of the latent space. Our model outperforms the competing approaches over a range of restoration tasks.
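
    To make the scheme in the abstract concrete, the sketch below shows the general idea: each image is represented by a latent code constrained to a convolutional manifold (the output set of a small ConvNet with trainable weights), a shared decoder maps that code to image space, and restoration becomes optimization over the manifold against the observed pixels. This is a minimal illustrative sketch in PyTorch, not the paper's implementation; all layer sizes, module names (LatentConvNet, Decoder), resolutions, and the masked inpainting loss are assumptions chosen for brevity.

```python
# Illustrative sketch of a convolutional-manifold latent parameterization.
# All architecture details below are assumptions for demonstration only.
import torch
import torch.nn as nn

class LatentConvNet(nn.Module):
    """Per-image parameterization: a fixed random input tensor is pushed
    through trainable convolutions, so the latent code is restricted to
    the manifold of this small network's possible outputs."""
    def __init__(self, in_ch=8, latent_ch=64, size=16):
        super().__init__()
        # Fixed input; only the conv weights below are optimized per image.
        self.register_buffer("z", torch.randn(1, in_ch, size, size))
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self):
        return self.net(self.z)  # high-dimensional latent "image"

class Decoder(nn.Module):
    """Shared generator mapping the latent space to RGB images; in the
    paper's setting it is learned on the dataset, then kept fixed."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="nearest"),
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, s):
        return self.net(s)

# Restoration as optimization over the convolutional manifold:
# e.g. hole inpainting with a known mask (1 = observed pixel).
decoder = Decoder().eval()
for p in decoder.parameters():
    p.requires_grad_(False)

latent = LatentConvNet()
corrupted = torch.rand(1, 3, 64, 64)          # stand-in for a masked image
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()

opt = torch.optim.Adam(latent.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    restored = decoder(latent())
    # Fit the observed pixels only; the learned prior fills in the hole.
    loss = ((restored - corrupted) * mask).pow(2).mean()
    loss.backward()
    opt.step()
```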

    Original language: English
    Publication status: Published - 2019
    Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
    Duration: 6 May 2019 – 9 May 2019

    Conference

    Conference: 7th International Conference on Learning Representations, ICLR 2019
    Country/Territory: United States
    City: New Orleans
    Period: 6/05/19 – 9/05/19
