Texture networks: Feed-forward synthesis of textures and stylized images

Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, Victor Lempitsky

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

256 Citations (Scopus)

Abstract

Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
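The "complex and expressive loss function" referred to in the abstract is the Gram-matrix texture loss of Gatys et al., computed on the feature maps of a fixed, pretrained descriptor network (VGG in the original work). The sketch below is a minimal, hypothetical illustration of that loss in PyTorch; the function names are illustrative, and the paper's actual generator is a multi-scale convolutional architecture not shown here.

```python
# Hypothetical sketch of the Gram-matrix texture loss (Gatys et al.)
# used as the training objective for a feed-forward generator.
# All names are illustrative, not the paper's exact configuration.
import torch
import torch.nn.functional as F


def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise feature correlations for a batch of feature maps."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    # Normalize by the number of spatial positions so the loss does not
    # depend on the input resolution.
    return f @ f.transpose(1, 2) / (h * w)


def texture_loss(gen_feats, tex_feats):
    """Sum of squared Gram-matrix differences over a set of layers.

    gen_feats / tex_feats: lists of feature maps extracted from a fixed,
    pretrained descriptor network (e.g. VGG) for the generated image and
    the reference texture, respectively.
    """
    return sum(
        F.mse_loss(gram_matrix(g), gram_matrix(t))
        for g, t in zip(gen_feats, tex_feats)
    )
```

During training, the generator's output is passed through the fixed descriptor network to compute this loss; at test time only the light-weight generator runs, which is the source of the hundreds-fold speedup over per-image optimization.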

Original language: English
Title of host publication: 33rd International Conference on Machine Learning, ICML 2016
Editors: Kilian Q. Weinberger, Maria Florina Balcan
Publisher: International Machine Learning Society (IMLS)
Pages: 2027-2041
Number of pages: 15
ISBN (Electronic): 9781510829008
Publication status: Published - 2016
Externally published: Yes
Event: 33rd International Conference on Machine Learning, ICML 2016 - New York City, United States
Duration: 19 Jun 2016 – 24 Jun 2016

Publication series

Name: 33rd International Conference on Machine Learning, ICML 2016
Volume: 3

Conference

Conference: 33rd International Conference on Machine Learning, ICML 2016
Country/Territory: United States
City: New York City
Period: 19/06/16 – 24/06/16
