TY - GEN

T1 - Advances in PARAFAC using parallel block decomposition

AU - Phan, Anh Huy

AU - Cichocki, Andrzej

PY - 2009

Y1 - 2009

N2 - Parallel factor analysis (PARAFAC) is a multi-way decomposition method that finds hidden factors in raw tensor data, with many potential applications in neuroscience, bioinformatics, chemometrics, etc. [1,2]. The Alternating Least Squares (ALS) algorithm can explain the raw tensor by a small number of rank-one tensors with a high degree of fit. However, for large-scale data, the need to compute Khatri-Rao products of long factors and to multiply large matrices means that existing algorithms incur high computational cost and require large amounts of memory. Hence the decomposition of large-scale tensors remains a challenging problem for PARAFAC. In this paper, we propose a new algorithm, based on the ALS algorithm, which computes Hadamard products and small-matrix multiplications instead of Khatri-Rao products. The new algorithm is able to process extremely large-scale tensors with billions of entries in parallel. Extensive experiments confirm the validity and high performance of the developed algorithm in comparison with other well-known algorithms.

AB - Parallel factor analysis (PARAFAC) is a multi-way decomposition method that finds hidden factors in raw tensor data, with many potential applications in neuroscience, bioinformatics, chemometrics, etc. [1,2]. The Alternating Least Squares (ALS) algorithm can explain the raw tensor by a small number of rank-one tensors with a high degree of fit. However, for large-scale data, the need to compute Khatri-Rao products of long factors and to multiply large matrices means that existing algorithms incur high computational cost and require large amounts of memory. Hence the decomposition of large-scale tensors remains a challenging problem for PARAFAC. In this paper, we propose a new algorithm, based on the ALS algorithm, which computes Hadamard products and small-matrix multiplications instead of Khatri-Rao products. The new algorithm is able to process extremely large-scale tensors with billions of entries in parallel. Extensive experiments confirm the validity and high performance of the developed algorithm in comparison with other well-known algorithms.

UR - http://www.scopus.com/inward/record.url?scp=76649115614&partnerID=8YFLogxK

U2 - 10.1007/978-3-642-10677-4_36

DO - 10.1007/978-3-642-10677-4_36

M3 - Conference contribution

AN - SCOPUS:76649115614

SN - 3642106765

SN - 9783642106767

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 323

EP - 330

BT - Neural Information Processing - 16th International Conference, ICONIP 2009, Proceedings

T2 - 16th International Conference on Neural Information Processing, ICONIP 2009

Y2 - 1 December 2009 through 5 December 2009

ER -