## Abstract

In this letter, we analyze a two-stage cluster-then-l^1-optimization approach for sparse representation of a data matrix, which is also a promising approach for blind source separation (BSS) when fewer sensors than sources are present. First, sparse representation (factorization) of a data matrix is discussed. For a given overcomplete basis matrix, the corresponding sparse solution (coefficient matrix) with minimum l^1 norm is unique with probability one and can be obtained using a standard linear programming algorithm. The equivalence of the l^1-norm solution and the l^0-norm solution is also analyzed within a probabilistic framework: if the obtained l^1-norm solution is sufficiently sparse, then it equals the l^0-norm solution with high probability. Furthermore, the l^1-norm solution is robust to noise, whereas the l^0-norm solution is not, showing that the l^1 norm is a good sparsity measure. These results can be used as a recoverability analysis of BSS, as discussed. The basis matrix in this letter is estimated using a clustering algorithm followed by normalization, in which the matrix columns are the cluster centers of normalized data column vectors. Zibulevsky, Pearlmutter, Bofill, and Kisilev (2000) used this kind of two-stage approach in underdetermined BSS. Our recoverability analysis shows that this approach can deal with the situation in which the sources overlap to some degree in the analyzed domain and with the case in which the number of sources is unknown. It is also robust to additive noise and to estimation error in the mixing matrix. Finally, four simulation examples and an EEG data analysis example are presented to illustrate the algorithm's utility and demonstrate its performance.
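The two-stage pipeline summarized above — estimating the basis (mixing) matrix by clustering normalized data columns, then recovering the minimum-l^1-norm coefficients via linear programming — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are hypothetical, and SciPy's k-means and LP solver stand in for whichever clustering and LP routines are actually used.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import linprog

def estimate_basis(X, n_sources, seed=0):
    """Stage 1: cluster normalized data columns; cluster centers become basis columns."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)   # unit-norm columns
    s = np.sign(Xn[0])
    s[s == 0] = 1.0
    Xn = Xn * s                                          # fold antipodal directions together
    centers, _ = kmeans2(Xn.T, n_sources, seed=seed)
    A = centers.T
    return A / np.linalg.norm(A, axis=0, keepdims=True)  # normalize estimated basis columns

def l1_sparse_code(A, x):
    """Stage 2: min ||s||_1 subject to A s = x, as a standard LP.

    Split s = u - v with u, v >= 0; then ||s||_1 = sum(u) + sum(v) at the optimum.
    """
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum of u and v
    A_eq = np.hstack([A, -A])          # equality constraint A u - A v = x
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v
```

With an overcomplete 2×3 basis and a 1-sparse source vector, the LP recovers the sparse representation exactly, consistent with the l^1/l^0 equivalence discussed in the abstract.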

| Original language | English |
|---|---|
| Pages (from-to) | 1193-1234 |
| Number of pages | 42 |
| Journal | Neural Computation |
| Volume | 16 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - Jun 2004 |
| Externally published | Yes |