
The Correlation Analysis Of Audio Signal Emotion Recognition And The Audience's EEG Signal

Posted on: 2019-01-07    Degree: Master    Type: Thesis
Country: China    Candidate: S Q Chen    Full Text: PDF
GTID: 2348330545992119    Subject: Control Science and Engineering
Abstract/Summary:
In communication between humans and nature, and among humans themselves, audio signals both transmit and express information. Music producers and record companies care deeply about which audio characteristics evoke which emotional experiences, since this knowledge helps them anticipate the success of a release and avoid losses. At the same time, affective computing is a hotspot in human-computer interaction research and in the exploration of human emotion. The electroencephalogram (EEG) is a typical physiological signal that carries rich emotional information and reflects changes in emotional state. In recent years, advances in acquisition technology have made EEG recording more convenient, and the extraction of emotion-related EEG features has attracted extensive research. Against this background, this study extracts typical features from audio signals of clearly different emotions and from the corresponding EEG signals of listeners, then quantitatively analyzes and evaluates the correlation between them. The specific contents are as follows:

(1) From the CASIA speech database, 80 samples labeled with four emotions (anger, joy, sadness, calm) were selected, and the labels were verified with a T-test. A typical 64-dimensional feature set was extracted to represent the speech signals. Then 138 statistical features were extracted with MIRtoolbox, including waveform and spectrum characteristics; the mean, variance, and slope of tone and intonation features; and periodic frequency, amplitude, periodic entropy, and peak centroid. Finally, several dimensionality-reduction methods were applied to the original data, and the reduced feature sets were verified with multiple classifiers. The results show that a 10-dimensional feature set for the speech signals and an 8-dimensional feature set for the music signals perform best.

(2) This thesis extracted 27-dimensional features for each of the 12 electrodes most closely related to human emotion. Correlation-Based Feature Selection (CFS) was then employed to select the feature subset most strongly correlated with the emotion labels yet least redundant internally. Several classifiers were used to compare the recognition rates of the original feature set and the selected subset, and the results were analyzed in detail to determine how the selected features affect the recognition of different emotions.

(3) Combining the above results, the music-signal features and the EEG features they evoke were analyzed jointly. The experiment compares the original 20-dimensional joint feature set with a 6-dimensional set obtained through GA+CFS dimensionality reduction, classifying each with a variety of classifiers. The experimental results show that the 6-dimensional reduced feature set suffices to classify the EEG signals. Among the classifiers, LDA and C4.5 perform best; the BP network achieves a recognition rate above 80%, but its slow modeling limits the number of training iterations feasible in practical applications.

(4) A speech and music emotion recognition system was implemented on the LabVIEW platform together with MATLAB.
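The abstract mentions summarizing feature trajectories by their mean, variance, and slope, but gives no implementation details (the thesis used MIRtoolbox in MATLAB). As a minimal pure-Python sketch of what such frame-level statistics look like, assuming a 1-D trajectory of per-frame feature values (e.g., a spectral-centroid curve) as input:

```python
def trajectory_stats(values):
    """Summarize a frame-wise feature trajectory with the statistics the
    abstract names: mean, variance, and slope (least-squares line fit
    over the frame index)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    # Slope of the least-squares line: value ~ slope * frame_index + b.
    t_mean = (n - 1) / 2
    num = sum((i - t_mean) * (v - mean) for i, v in enumerate(values))
    den = sum((i - t_mean) ** 2 for i in range(n))
    slope = num / den if den else 0.0
    return {"mean": mean, "var": var, "slope": slope}
```

For a linearly rising trajectory such as `[0, 1, 2, 3]`, this yields mean 1.5, variance 1.25, and slope 1.0; applying it to each of the 138 trajectories would produce one scalar per statistic per feature, matching the flattened statistical representation the abstract describes.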
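The CFS step in parts (2) and (3) scores a candidate subset by how strongly its features correlate with the class while penalizing correlation among the features themselves. The thesis does not publish its code, so the following is an illustrative pure-Python sketch of the standard CFS merit with a greedy forward search; the toy data in the usage note are hypothetical, not from the thesis:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def cfs_merit(features, labels, subset):
    """CFS merit: k*mean|r_cf| / sqrt(k + k*(k-1)*mean|r_ff|),
    where r_cf is feature-class and r_ff feature-feature correlation."""
    k = len(subset)
    r_cf = sum(abs(pearson(features[i], labels)) for i in subset) / k
    if k == 1:
        r_ff = 0.0
    else:
        pairs = [(i, j) for i in subset for j in subset if i < j]
        r_ff = sum(abs(pearson(features[i], features[j]))
                   for i, j in pairs) / len(pairs)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward(features, labels):
    """Greedy forward search: add the feature that raises the merit most,
    stop when no candidate improves it."""
    remaining = set(range(len(features)))
    selected, best = [], 0.0
    while remaining:
        cand, score = max(
            ((f, cfs_merit(features, labels, selected + [f]))
             for f in sorted(remaining)),
            key=lambda t: t[1])
        if score <= best:
            break
        selected.append(cand)
        remaining.remove(cand)
        best = score
    return selected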
Keywords/Search Tags:Emotion calculation, Feature selection, Classifier