Video denoising using low rank tensor decomposition

Lihua Gui, Gaochao Cui, Qibin Zhao, Dongsheng Wang, Andrzej Cichocki, Jianting Cao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block-matching collaborative filtering. However, its main drawback is that the noise standard deviation of the whole video sequence must be known in advance. In this paper, we present a tensor-based denoising framework that considers 3D patches instead of 2D patches. By collecting similar 3D patches non-locally, we employ low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Our method is therefore more practical, as it does not require the noise variance to be known. Experimental results on video denoising demonstrate the effectiveness of the proposed method.
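The sketch below is a minimal, hypothetical illustration of the pipeline outlined in the abstract, not the authors' implementation: it groups similar 3D (spatio-temporal) patches from a noisy video into a 4th-order tensor and filters the group with a truncated higher-order SVD as a simple stand-in for low-rank tensor decomposition. The paper infers the rank and the noise variance through a Bayesian model with a non-informative prior on the noise precision, whereas the ranks, patch sizes, and function names here are assumptions chosen for illustration.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def fold(mat, mode, shape):
    """Inverse of `unfold` for a tensor with the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(mat.reshape([shape[mode]] + rest), 0, mode)

def low_rank_approx(t, ranks):
    """Truncated HOSVD: project every mode onto its leading singular vectors."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(U[:, :r])
    approx = t
    for mode, U in enumerate(factors):
        proj = U @ (U.T @ unfold(approx, mode))  # rank-r projection in this mode
        approx = fold(proj, mode, approx.shape)
    return approx

def extract_patch(video, f, y, x, pf, ps):
    """Cut a pf x ps x ps spatio-temporal patch starting at frame f, pixel (y, x)."""
    return video[f:f + pf, y:y + ps, x:x + ps]

def denoise_group(video, ref, pf=4, ps=8, search=12, n_similar=16, ranks=(6, 3, 6, 6)):
    """Denoise one reference 3D patch: block matching + low-rank group filtering."""
    F, H, W = video.shape
    f0, y0, x0 = ref
    ref_patch = extract_patch(video, f0, y0, x0, pf, ps)
    # Block matching: rank candidate 3D patches in a local search window by
    # Euclidean distance to the reference (a full method would also search
    # other frame blocks, i.e. non-locally in time).
    candidates = []
    for y in range(max(0, y0 - search), min(H - ps, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(W - ps, x0 + search) + 1):
            p = extract_patch(video, f0, y, x, pf, ps)
            candidates.append((np.sum((p - ref_patch) ** 2), p))
    candidates.sort(key=lambda c: c[0])
    group = np.stack([p for _, p in candidates[:n_similar]])  # (n, pf, ps, ps)
    # Low-rank approximation of the stacked group acts as collaborative filtering.
    ranks = tuple(min(r, s) for r, s in zip(ranks, group.shape))
    return low_rank_approx(group, ranks)[0]  # denoised estimate of the reference patch

if __name__ == "__main__":
    # Toy usage on a synthetic noisy video (values are arbitrary).
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0, 1, 64), (8, 64, 1))  # 8 frames of 64x64
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    patch_hat = denoise_group(noisy, ref=(0, 20, 20))
    print("denoised patch shape:", patch_hat.shape)
```

In a full denoiser this grouping and filtering step would be repeated over all reference positions and the filtered patches aggregated back into the video; in the paper the rank and noise level are not fixed by hand but inferred from the data.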

Original language: English
Title of host publication: Ninth International Conference on Machine Vision, ICMV 2016
Editors: Dmitry P. Nikolaev, Antanas Verikas, Jianhong Zhou, Petia Radeva, Wei Zhang
Publisher: SPIE
ISBN (Electronic): 9781510611313
DOIs
Publication status: Published - 2017
Externally published: Yes
Event: 9th International Conference on Machine Vision, ICMV 2016 - Nice, France
Duration: 18 Nov 2016 - 20 Nov 2016

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 10341
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 9th International Conference on Machine Vision, ICMV 2016
Country/Territory: France
City: Nice
Period: 18/11/16 - 20/11/16

Keywords

  • collaborative filtering
  • low rank tensor decomposition
  • tensor rank
  • Video denoising
