Information backpropagation for blind separation of sources in nonlinear mixture

Howard H. Yang, Shun Ichi Amari, Andrzej Cichocki

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, peer-review

27 Citations (Scopus)

Abstract

The linear mixture model is assumed in most papers devoted to independent component analysis. A more realistic mixing model is nonlinear. In this paper, a two-layer perceptron is used as a de-mixing system to extract sources from a nonlinear mixture. Learning algorithms for the de-mixing system are derived by two approaches: maximum entropy and minimum mutual information. The algorithms derived from the two approaches share a common structure. The new learning equations for the hidden layer differ from our previous learning equations for the output layer. The natural gradient descent method is applied to maximize entropy and minimize mutual information. An information (entropy or mutual information) backpropagation method is proposed to derive the learning equations for the hidden layer.
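As background for the output-layer rule that the abstract says is extended to the hidden layer, the following is a minimal sketch (not the paper's two-layer algorithm) of natural-gradient learning for a single linear de-mixing layer, W ← W + η(I − φ(y)yᵀ)W, with an assumed score-function surrogate φ(·) = tanh(·) suited to super-Gaussian sources. Function and variable names here are illustrative, not taken from the paper.

```python
import numpy as np

def natural_gradient_ica(X, eta=0.01, n_iter=2000, seed=0):
    """Estimate a de-mixing matrix W for observations X of shape (n_channels, n_samples).

    Batch form of the natural-gradient update W <- W + eta*(I - phi(y) y^T) W.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # start near identity
    I = np.eye(n)
    for _ in range(n_iter):
        Y = W @ X                # current source estimates
        phi = np.tanh(Y)         # assumed nonlinearity for super-Gaussian sources
        # sample average of phi(y) y^T approximates the expectation in the rule
        dW = (I - (phi @ Y.T) / X.shape[1]) @ W
        W += eta * dW
    return W

# Toy demo: two independent Laplacian sources mixed linearly.
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))          # independent super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # "unknown" mixing matrix
X = A @ S
W = natural_gradient_ica(X)
Y = W @ X                                # recovered sources, up to scale and permutation
```

The recovered rows of Y match the true sources only up to permutation and scaling, the usual indeterminacy in blind source separation; the paper's contribution is backpropagating the entropy (or mutual information) objective through a hidden layer so the same idea applies to nonlinear mixtures.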

Original language: English
Title of host publication: 1997 IEEE International Conference on Neural Networks, ICNN 1997
Pages: 2141-2146
Number of pages: 6
DOIs
Publication status: Published - 1997
Externally published: Yes
Event: 1997 IEEE International Conference on Neural Networks, ICNN 1997 - Houston, TX, United States
Duration: 9 Jun 1997 - 12 Jun 1997

Publication series

Name: IEEE International Conference on Neural Networks - Conference Proceedings
Volume: 4
ISSN (Print): 1098-7576

Conference

Conference: 1997 IEEE International Conference on Neural Networks, ICNN 1997
Country/Territory: United States
City: Houston, TX
Period: 9/06/97 - 12/06/97

