TY - JOUR

T1 - Nonnegative matrix factorization with constrained second-order optimization

AU - Zdunek, Rafal

AU - Cichocki, Andrzej

PY - 2007/8

Y1 - 2007/8

N2 - Nonnegative matrix factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R+^{M×R} and X ∈ R+^{R×T} such that Y ≅ AX, given only Y ∈ R^{M×T} and the assigned index R. This method has found a wide spectrum of applications in signal and image processing, such as blind source separation (BSS), spectra recovery, pattern recognition, segmentation, or clustering. Such a factorization is usually performed with an alternating gradient descent technique applied to the squared Euclidean distance or the Kullback-Leibler divergence. This approach underlies the widely known Lee-Seung NMF algorithms, which belong to a class of multiplicative iterative algorithms. It is well known that these algorithms, in spite of their low complexity, converge slowly, give only a strictly positive solution, and can easily fall into local minima of a nonconvex cost function. In this paper, we propose to exploit the second-order terms of a cost function to overcome the disadvantages of gradient (multiplicative) algorithms. First, a projected quasi-Newton method is presented, where a Hessian regularized with the Levenberg-Marquardt approach is inverted with the Q-less QR decomposition. Since the matrices A and/or X are usually sparse, a more sophisticated hybrid approach based on the gradient projection conjugate gradient (GPCG) algorithm, invented by Moré and Toraldo, is adapted for NMF. The gradient projection (GP) method is exploited to find zero-value (active) components, and Newton steps are then taken to compute only the positive (inactive) components with the conjugate gradient (CG) method. As a cost function, we use the α-divergence, which unifies many well-known cost functions. We apply our new NMF method to a BSS problem with mixed signals and images. The results demonstrate the high robustness of our method.

KW - Blind source separation

KW - Fixed-point algorithm

KW - GPCG

KW - Nonnegative matrix factorization

KW - Quasi-Newton method

KW - Second-order optimization

UR - http://www.scopus.com/inward/record.url?scp=34247173538&partnerID=8YFLogxK

U2 - 10.1016/j.sigpro.2007.01.024

DO - 10.1016/j.sigpro.2007.01.024

M3 - Article

AN - SCOPUS:34247173538

VL - 87

SP - 1904

EP - 1916

JO - Signal Processing

JF - Signal Processing

SN - 0165-1684

IS - 8

ER -