
Research and Construction of a Multimodal Affective Database

Posted on: 2014-01-24    Degree: Master    Type: Thesis
Country: China    Candidate: S P Xuan    Full Text: PDF
GTID: 2248330395477457    Subject: Control Science and Engineering
Abstract/Summary:
Affective computing aims to give computers the ability to recognize, understand, and adapt to human emotions, enabling more efficient and natural human-computer interaction. Human emotions are carried by several modalities, such as facial expression, speech, and physiological signals, so multimodal affective recognition can accelerate the development of affective computing; this in turn requires a multimodal affective database. Most existing affective databases contain emotional data of only one modality, and the few multimodal affective databases that have been built still have drawbacks. It is therefore theoretically important to design and build a multimodal affective database covering seven emotions: neutral, happy, surprise, disgust, sad, angry, and fear.

First, a scheme was designed to collect three modalities of emotional signals synchronously: facial expression, speech, and forehead EEG. Video materials were screened to produce emotion-inducing videos, and emotional speech contexts were designed for every emotion.

Then the emotional data collection system was set up, and a preliminary collection round was used to find and fix problems in the original scheme. The formal acquisition experiment was then conducted, synchronously recording video and EEG signals containing the three-modal emotional data of 16 graduate students, and each participant's emotional state and intensity were recorded at every emotional sampling point.

Afterwards, naming rules for the emotional data files were formulated. Facial expression pictures and speech segments with the target emotion were extracted from the recorded video files; the emotional states of the facial images and of the noise-free speech segments were evaluated, and the pictures and speeches that expressed the target emotion strongly were selected. The face region was then cropped from each image and converted to grayscale, and the starting points and durations of the emotional speeches were labeled. The emotional state of each EEG segment was determined jointly by the participant's subjective evaluation and the emotional states of the facial expression and speech at the corresponding sampling point, and the EEG segments recorded at moments of intense emotional expression were selected with the help of the speech starting points and the time values in the EEG recording files.

Finally, a multimodal affective database was obtained, containing 333 facial expression pictures, 273 emotional speeches, and 230 segments of forehead EEG signals.
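The face-preprocessing step (cropping the face region and converting it to grayscale) can be sketched as follows. This is an illustrative example only: the bounding-box format and the ITU-R BT.601 luminance weights are assumptions, since the abstract does not specify the exact cropping or grayscale method used.

```python
import numpy as np

def crop_and_gray(frame, box):
    """Crop box = (top, bottom, left, right) from an RGB frame and
    return the face region as a uint8 grayscale image."""
    top, bottom, left, right = box
    face = frame[top:bottom, left:right, :].astype(np.float64)
    # BT.601 weighted sum of the R, G, B channels (an assumed choice).
    gray = 0.299 * face[..., 0] + 0.587 * face[..., 1] + 0.114 * face[..., 2]
    return np.clip(gray, 0, 255).astype(np.uint8)

# Example: a synthetic 100x100 RGB frame with a 40x40 "face" region.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[30:70, 30:70] = (200, 150, 100)
gray_face = crop_and_gray(frame, (30, 70, 30, 70))
print(gray_face.shape)  # (40, 40)
```

In practice the bounding box would come from a face detector rather than being fixed by hand; only the crop-then-grayscale flow corresponds to the step described above.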
Keywords/Search Tags:multimodal affective database, facial expression pictures, emotional speeches, forehead EEG signals