When analysing multichannel processes, it is often convenient to use a visualisation to help understand and interpret spatio-temporal dependencies between the channels, and to perform input variable selection. This is particularly advantageous when the levels of noise are high, when the active channel changes its spatial location with time, and for spatio-temporal processes where several channels contain meaningful information, such as in electroencephalogram (EEG)-based brain activity monitoring. To provide insight into the dynamics of brain electrical responses, spatial sonification of multichannel EEG is performed, whereby the information from active channels is fused into music-like audio. Owing to its fusion-via-fission mode of operation, empirical mode decomposition (EMD) is employed as a time-frequency analyser, and the brain responses to visual stimuli are sonified to provide audio feedback. Such perceptual feedback has enormous potential in multimodal brain-computer and brain-machine interfaces (BCI/BMI).
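To make the fusion-via-fission idea concrete, the sketch below shows a minimal, illustrative EMD: each channel is "fissioned" into intrinsic mode functions (IMFs) by iterative sifting, and the IMFs can then be fused (e.g. mapped to audio) downstream. This is not the authors' implementation; it assumes a fixed number of sifting iterations and natural cubic-spline envelopes, and omits the usual refinements (boundary handling, stoppage criteria).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, n_sift=10):
    """Extract one IMF candidate from x by iterative sifting."""
    h = x.copy()
    for _ in range(n_sift):
        dx = np.diff(h)
        # local maxima: rising then falling; local minima: falling then rising
        maxima = np.where((np.hstack([0.0, dx]) > 0) & (np.hstack([dx, 0.0]) < 0))[0]
        minima = np.where((np.hstack([0.0, dx]) < 0) & (np.hstack([dx, 0.0]) > 0))[0]
        if len(maxima) < 3 or len(minima) < 3:
            break  # too few extrema to form envelopes
        upper = CubicSpline(t[maxima], h[maxima], bc_type="natural")(t)
        lower = CubicSpline(t[minima], h[minima], bc_type="natural")(t)
        h = h - (upper + lower) / 2.0  # remove the local mean
    return h

def emd(x, t, max_imfs=4):
    """Decompose x into IMFs plus a residue; IMFs + residue == x exactly."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        imf = sift(residue, t)
        imfs.append(imf)
        residue = residue - imf
        # stop once the residue is monotonic (no oscillatory mode left)
        if np.all(np.diff(residue) >= 0) or np.all(np.diff(residue) <= 0):
            break
    return imfs, residue

# Illustrative use: a fast and a slow oscillation on a drift, as a stand-in
# for one EEG channel; the fast mode separates into the earlier IMFs.
t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 5 * t) + 0.5 * t
imfs, residue = emd(x, t)
```

By construction the decomposition is lossless (the IMFs and residue sum back to the input), which is what makes EMD attractive for fission-then-fusion pipelines: information is redistributed across scales, not discarded.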
Title of host publication: Signal Processing Techniques for Knowledge Extraction and Information Fusion
Number of pages: 13
Publication status: Published - 2008