Learning deep embeddings with histogram loss

Evgeniya Ustinova, Victor Lempitsky

    Research output: Contribution to journal › Conference article › peer-review

    217 Citations (Scopus)


    We suggest a loss for learning deep embeddings. The new loss does not introduce parameters that need to be tuned and results in very good embeddings across a range of datasets and problems. The loss is computed by estimating two distributions of similarities, for positive (matching) and negative (non-matching) sample pairs, and then computing the probability that a positive pair has a lower similarity score than a negative pair based on the estimated similarity distributions. We show that such operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. In the experiments, the new loss performs favourably compared to recently proposed alternatives.
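    The computation the abstract describes can be sketched numerically: build soft-assigned 1D histograms of positive and negative pair similarities, then estimate the probability that a positive pair scores below a negative pair by integrating the negative histogram against the positive cumulative distribution. A minimal NumPy sketch follows; the function names, bin count, and similarity range [-1, 1] are illustrative assumptions, not the authors' reference implementation.

    ```python
    import numpy as np

    def soft_histogram(sims, num_bins=100, lo=-1.0, hi=1.0):
        # Soft assignment: each similarity contributes linearly (triangular
        # weights) to its two nearest histogram nodes, making the histogram
        # piecewise differentiable in the similarities.
        nodes = np.linspace(lo, hi, num_bins)   # bin nodes t_1 .. t_R
        delta = (hi - lo) / (num_bins - 1)      # node spacing
        w = np.maximum(0.0, 1.0 - np.abs(sims[:, None] - nodes[None, :]) / delta)
        return w.sum(axis=0) / len(sims)        # normalised histogram (sums to 1)

    def histogram_loss(pos_sims, neg_sims, num_bins=100):
        # Estimate P(similarity of a positive pair < similarity of a negative pair):
        # integrate the positive CDF against the negative distribution.
        h_pos = soft_histogram(pos_sims, num_bins)
        h_neg = soft_histogram(neg_sims, num_bins)
        cdf_pos = np.cumsum(h_pos)
        return float(np.sum(h_neg * cdf_pos))
    ```

    Well-separated distributions (positives near +1, negatives near -1) give a loss close to 0, while fully overlapping distributions give roughly 0.5; in a deep-learning framework the same expression would be written with differentiable tensor ops so it can be minimised by stochastic gradient descent.
    
    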

    Original language: English
    Pages (from-to): 4177-4185
    Number of pages: 9
    Journal: Advances in Neural Information Processing Systems
    Publication status: Published - 2016
    Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
    Duration: 5 Dec 2016 - 10 Dec 2016

