Fast nonnegative matrix/tensor factorization based on low-rank approximation

Guoxu Zhou, Andrzej Cichocki, Shengli Xie

Research output: Contribution to journal › Review article › peer-review

113 Citations (Scopus)


Nonnegative matrix factorization (NMF) algorithms often suffer from slow convergence due to the nonnegativity constraints, especially for large-scale problems. Low-rank approximation methods such as principal component analysis (PCA) are widely used in matrix factorizations to suppress noise and to reduce computational complexity and memory requirements. However, they cannot be applied to NMF directly, as they result in factors with mixed signs. In this paper, low-rank approximation is introduced to NMF (named lraNMF), which not only reduces the computational complexity of NMF algorithms significantly, but also suppresses bipolar noise. In fact, the new update rules are typically about M/R times faster than traditional NMF update rules, where M is the number of observations and R is the low rank of the latent factors. Therefore lraNMF is particularly efficient in the case where R ≪ M, which is the general case in NMF. The proposed update rules can also be incorporated into most existing NMF algorithms straightforwardly, as long as they are based on Euclidean distance. The concept of lraNMF is then generalized to the tensor field to perform a fast sequential nonnegative Tucker decomposition (NTD). By applying the proposed methods, the practicability of NMF/NTD is significantly improved. Simulations on synthetic and real data show the validity and efficiency of the proposed approaches.
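To make the speedup idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of Euclidean multiplicative-update NMF accelerated by a low-rank approximation X ≈ ABᵀ, here obtained via truncated SVD. The key point from the abstract is that products with X are replaced by products with the thin factors A and B, cutting the per-iteration cost from O(MNJ) to O((M+N)RJ). Because A and B carry mixed signs, the numerators can go negative; clipping them at zero is a pragmatic simplification assumed for this sketch, not necessarily how lraNMF handles it. The function name and parameters are illustrative.

```python
import numpy as np

def lra_nmf_sketch(X, J, R=None, n_iter=200, eps=1e-9):
    """Sketch of low-rank-accelerated NMF: X (M x N, nonnegative) is
    approximated as W @ H with W (M x J), H (J x N) both nonnegative.
    R is the rank of the low-rank approximation of X (R = J by default)."""
    M, N = X.shape
    R = R or J

    # Low-rank approximation X ~ A @ B.T via truncated SVD
    # (one possible choice; the paper's construction may differ).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = U[:, :R] * s[:R]      # M x R, mixed signs
    B = Vt[:R, :].T           # N x R, mixed signs

    rng = np.random.default_rng(0)
    W = rng.random((M, J))
    H = rng.random((J, N))

    for _ in range(n_iter):
        # X @ H.T becomes A @ (B.T @ H.T): cost O((M + N) R J), not O(M N J).
        # Clip to keep the multiplicative updates nonnegative (simplification).
        W *= (A @ (B.T @ H.T)).clip(min=0) / (W @ (H @ H.T) + eps)
        # Likewise W.T @ X becomes (W.T @ A) @ B.T.
        H *= ((W.T @ A) @ B.T).clip(min=0) / ((W.T @ W) @ H + eps)
    return W, H
```

Since W and H never touch X after the initial SVD, only the R-dimensional factors need to be kept in memory during the iterations, which is where the R ≪ M savings come from.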

Original language: English
Article number: 6166354
Pages (from-to): 2928-2940
Number of pages: 13
Journal: IEEE Transactions on Signal Processing
Issue number: 6
Publication status: Published - Jun 2012
Externally published: Yes


  • Low-rank approximation
  • Nonnegative matrix factorization (NMF)
  • Nonnegative Tucker decomposition (NTD)
  • Principal component analysis (PCA)


