
Design and Implementation of an On-Line Emotional Speech Recognition System

Posted on: 2016-04-18
Degree: Master
Type: Thesis
Country: China
Candidate: Y S Li
Full Text: PDF
GTID: 2208330461987261
Subject: Software engineering
Abstract/Summary:
With the rapid development of computer technology in recent years, humans rely increasingly on computers, and the demands placed on human-computer interaction have risen accordingly. Speech is the most common mode of human-computer interaction, yet traditional interaction focuses on the content being expressed while ignoring emotional factors. To make human-computer interaction more natural and harmonious, emotional factors must be taken into account. As a branch of speech recognition, emotional speech recognition has therefore become a research hotspot in recent years. Its main goal is to extract the emotional information carried by speech, so that a machine can identify the emotion in human speech and respond accordingly, ultimately realizing harmonious human-computer interaction.

This paper first introduces the background and significance of emotional speech research, and then the application prospects of emotional speech. The main work of this paper is as follows:

(1) This paper surveys the emotion classification schemes used at home and abroad and the classifiers commonly applied to emotion recognition. On the basis of this understanding of emotion types, it establishes an emotional speech database covering four emotions: happiness, anger, drunkenness, and neutrality. Utterances with clear emotional characteristics are retained and those without are removed, so as to guarantee the quality of model training and speech recognition.

(2) After the emotional speech database is established, this paper describes the emotional features to be extracted, and extracts short-time energy, short-time amplitude, pitch frequency, and MFCC features.
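As an illustration of the time-domain features named above (this is not the thesis's own code), the following is a minimal numpy sketch of short-time energy and short-time amplitude: the signal is split into overlapping windowed frames, and each frame is summarized by the sum of its squared (energy) or absolute (amplitude) samples. The frame length and hop size are arbitrary example choices; pitch and MFCC extraction are omitted for brevity.

```python
import numpy as np

def _frame(signal, frame_len=256, hop=128):
    """Slice a 1-D signal into overlapping Hamming-windowed frames (rows)."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame_len)[::hop]
    return frames * np.hamming(frame_len)

def short_time_energy(signal, frame_len=256, hop=128):
    """Per-frame energy: sum of squared windowed samples."""
    return np.sum(_frame(signal, frame_len, hop) ** 2, axis=1)

def short_time_amplitude(signal, frame_len=256, hop=128):
    """Per-frame amplitude: sum of absolute windowed samples."""
    return np.sum(np.abs(_frame(signal, frame_len, hop)), axis=1)
```

Both features vary with speaking intensity, which is one reason they carry emotional cues such as the raised vocal effort of anger.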
Emotional speech recognition experiments are then carried out using these different phonetic features.

(3) This paper introduces the modeling idea of the subspace method, presents three commonly used subspace approaches to model initialization, parameter training, and emotion recognition, and conducts comparative experiments among them. Finally, an online emotional speech recognition system is implemented based on the average learning subspace method.
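To make the subspace idea concrete, here is a minimal sketch (an assumption about the general technique, not the thesis's implementation) of a basic subspace classifier of the kind that underlies the average learning subspace method: each emotion class is represented by the leading eigenvectors of its feature correlation matrix, and a test vector is assigned to the class whose subspace captures the most of its energy. The iterative correction of the correlation matrices with misclassified samples, which distinguishes the *learning* subspace variants, is omitted here.

```python
import numpy as np

def fit_subspaces(X_by_class, dim=1):
    """For each class, keep the top-`dim` eigenvectors of its correlation matrix
    as an orthonormal subspace basis. X_by_class maps label -> (n, d) array."""
    bases = {}
    for label, X in X_by_class.items():
        R = X.T @ X / len(X)          # class correlation matrix (d x d)
        _, V = np.linalg.eigh(R)      # eigenvectors, ascending eigenvalue order
        bases[label] = V[:, -dim:]    # leading eigenvectors span the subspace
    return bases

def classify(x, bases):
    """Pick the class whose subspace projection of x has the largest energy."""
    return max(bases, key=lambda c: np.sum((bases[c].T @ x) ** 2))
```

In the thesis's setting, the rows of each class matrix would be feature vectors (e.g. MFCC-based) from the utterances of one emotion in the database.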
Keywords/Search Tags:Emotional speech database, Average learning subspace method, Emotion recognition, Emotion classification, Characteristic parameters