The purpose of this study is to clarify multi-modal brain processing related to human emotions. This study aimed to induce a controlled perturbation in the emotional system of the brain by multi-modal stimuli, and to investigate whether such emotional stimuli could induce reproducible and consistent changes in EEG signals. We exposed two subjects to auditory, visual, or combined audio-visual stimuli. Auditory stimuli consisted of voice recordings of the Japanese word 'arigato' (thank you) pronounced with three different intonations (Angry - A, Happy - H, or Neutral - N). Visual stimuli consisted of faces of women expressing the same emotional valences (A, H, or N). Audio-visual stimuli were composed using either congruent combinations of faces and voices (e.g. H × H) or non-congruent combinations (e.g. A × H). Data were collected with an EEG system, and analysis was performed by computing the topographic distributions of EEG power in the theta, alpha, and beta frequency bands. We compared two sets of conditions: emotional stimuli (A or H) vs. control (N), and congruent vs. non-congruent audio-visual stimuli. Topographic maps of EEG power differed between these conditions in both subjects. The obtained results suggest that EEG could be used as a tool to investigate emotional valence and to discriminate between various emotions.
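The band-power analysis described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the sampling rate, band edges, and Welch parameters are assumptions chosen for the example, and the input is synthetic data rather than the study's recordings.

```python
import numpy as np
from scipy.signal import welch

# Assumed sampling rate and conventional band edges (Hz); the paper does
# not specify these values, so they are illustrative only.
FS = 250
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS, bands=BANDS):
    """eeg: array of shape (n_channels, n_samples).

    Returns a dict mapping band name -> per-channel power, i.e. one
    value per electrode — the quantity a topographic map would display.
    """
    # Welch PSD estimate per channel, 2-second segments
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    powers = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        # Integrate the PSD over the band for each channel
        powers[name] = np.trapz(psd[:, mask], freqs[mask], axis=-1)
    return powers

# Synthetic example: 8 channels, 10 s of noise plus a 10 Hz (alpha) tone
rng = np.random.default_rng(0)
t = np.arange(10 * FS) / FS
eeg = rng.standard_normal((8, t.size)) + 5 * np.sin(2 * np.pi * 10 * t)
bp = band_powers(eeg)
```

Comparing such per-channel band powers between conditions (e.g. A vs. N) is what produces the topographic difference maps referred to in the abstract.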