N4-fields: Neural network nearest neighbor fields for image transforms

Yaroslav Ganin, Victor Lempitsky

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    52 Citations (Scopus)

    Abstract

    We propose a new architecture for difficult image processing operations, such as natural edge detection or thin object segmentation. The architecture is based on a simple combination of convolutional neural networks with nearest neighbor search. We focus on situations where the desired image transformation is too hard for a neural network to learn explicitly. We show that in such situations, applying nearest neighbor search on top of the network output improves the results considerably and compensates for the underfitting effect during neural network training. The approach is validated on three challenging benchmarks, where the performance of the proposed architecture matches or exceeds the state-of-the-art.
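    The abstract describes a two-stage pipeline: a CNN maps each image patch to a descriptor, and nearest neighbor search against a dictionary of training descriptors retrieves corresponding ground-truth annotation patches, which are averaged over overlapping locations. The sketch below illustrates only this retrieval-and-averaging idea; it is not the authors' implementation, and the model `net`, the patch sizes, and the stride are illustrative assumptions.

    # A minimal sketch of the CNN-descriptor + nearest-neighbor-retrieval idea,
    # not the authors' pipeline. Assumes `net` maps a patch to a 1-D descriptor.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def build_dictionary(net, train_patches, train_annotations):
        """Encode training patches and index their descriptors for NN search.

        train_annotations[i] is the ground-truth output patch (e.g. an edge
        map crop) paired with train_patches[i].
        """
        descriptors = np.stack([net(p) for p in train_patches])  # (N, D)
        index = NearestNeighbors(n_neighbors=1).fit(descriptors)
        return index, np.stack(train_annotations)                # (N, out, out)

    def n4_transform(net, index, annotations, image, patch=16, out=8, stride=4):
        """Slide a window over the image; for each patch, retrieve the
        annotation of the nearest training descriptor and average the
        overlapping predictions (patch/out/stride are illustrative)."""
        H, W = image.shape[:2]
        acc = np.zeros((H, W))
        cnt = np.zeros((H, W))
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                desc = net(image[y:y + patch, x:x + patch])
                _, nn = index.kneighbors(desc[None, :], n_neighbors=1)
                ann = annotations[nn[0, 0]]          # retrieved (out, out) patch
                cy = y + (patch - out) // 2          # center the output patch
                cx = x + (patch - out) // 2
                acc[cy:cy + out, cx:cx + out] += ann
                cnt[cy:cy + out, cx:cx + out] += 1
        return acc / np.maximum(cnt, 1)              # averaged transform map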

    Original language: English
    Title of host publication: Computer Vision - ACCV 2014 - 12th Asian Conference on Computer Vision, Revised Selected Papers
    Editors: Ming-Hsuan Yang, Hideo Saito, Daniel Cremers, Ian Reid
    Publisher: Springer Verlag
    Pages: 536-551
    Number of pages: 16
    ISBN (Print): 9783319168074
    DOIs
    Publication status: Published - 2015
    Event: 12th Asian Conference on Computer Vision, ACCV 2014 - Singapore, Singapore
    Duration: 1 Nov 2014 - 5 Nov 2014

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 9004
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Conference

    Conference: 12th Asian Conference on Computer Vision, ACCV 2014
    Country/Territory: Singapore
    City: Singapore
    Period: 1/11/14 - 5/11/14
