Learnable visual markers

Oleg Grinchuk, Vadim Lebedev, Victor Lempitsky

    Research output: Contribution to journal › Conference article › peer-review

    7 Citations (Scopus)


    We propose a new approach to designing visual markers (analogous to QR-codes, markers for augmented reality, and robotic fiducial tags) based on advances in deep generative networks. In our approach, the markers are obtained as color images synthesized by a deep network from input bit strings, while another deep network is trained to recover the bit strings from photos of these markers. The two networks are trained simultaneously in a joint backpropagation process that takes into account the characteristic photometric and geometric distortions associated with marker fabrication and marker scanning. Additionally, a stylization loss based on statistics of activations in a pretrained classification network can be inserted into the learning in order to shift the marker appearance towards some texture prototype. In the experiments, we demonstrate that the markers obtained using our approach are capable of retaining bit strings that are long enough to be practical. The ability to automatically adapt markers to the usage scenario and the desired capacity, as well as the ability to combine information encoding with artistic stylization, are the unique properties of our approach. As a byproduct, our approach provides insight into the structure of patterns that are most suitable for recognition by ConvNets and into their ability to distinguish composite patterns.
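    The joint synthesizer/recognizer training described in the abstract can be sketched with a toy NumPy stand-in: a linear "synthesizer" maps a bit string to a marker, a simulated photometric distortion (additive noise) is applied, and a linear "recognizer" recovers the bits; gradients of a binary cross-entropy loss flow through both networks at once. All dimensions, names, and the single-layer architecture here are illustrative assumptions, not the paper's actual deep networks:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K, M = 8, 32                        # bits per marker, marker "pixels" (flattened)
    We = rng.normal(0, 0.1, (M, K))     # synthesizer weights (toy stand-in)
    Wd = rng.normal(0, 0.1, (K, M))     # recognizer weights (toy stand-in)
    lr, noise_std = 0.1, 0.1

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for step in range(5000):
        b = rng.integers(0, 2, K).astype(float)        # random bit string to encode
        marker = sigmoid(We @ (2 * b - 1))             # synthesized "marker" image
        photo = marker + rng.normal(0, noise_std, M)   # simulated photometric distortion
        p = sigmoid(Wd @ photo)                        # recognizer's bit probabilities
        # binary cross-entropy gradient, backpropagated jointly through both networks
        dlogits = p - b
        dWd = np.outer(dlogits, photo)
        dmarker = (Wd.T @ dlogits) * marker * (1 - marker)
        dWe = np.outer(dmarker, 2 * b - 1)
        Wd -= lr * dWd
        We -= lr * dWe

    # evaluate bit-recovery accuracy on fresh bit strings under the same distortion
    trials, correct = 200, 0
    for _ in range(trials):
        b = rng.integers(0, 2, K).astype(float)
        photo = sigmoid(We @ (2 * b - 1)) + rng.normal(0, noise_std, M)
        bits = (sigmoid(Wd @ photo) > 0.5).astype(float)
        correct += (bits == b).sum()
    accuracy = correct / (trials * K)
    ```

    Because the distortion is injected between the two networks during training, the synthesizer is pushed towards markers that remain decodable after corruption — the same mechanism the paper uses with richer geometric and photometric distortion models.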

    Original language: English
    Pages (from-to): 4150-4158
    Number of pages: 9
    Journal: Advances in Neural Information Processing Systems
    Publication status: Published - 2016
    Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
    Duration: 5 Dec 2016 - 10 Dec 2016

