Depression has attracted considerable attention in recent years. It causes low mood, negative thoughts, and even suicidal tendencies, with serious adverse consequences. In this thesis, a multi-source information joint decision model is built by means of emotion recognition and used to analyze a subject's behavioral data to assist in the diagnosis of depression. The main work is as follows.

On the one hand, deep-learning-based single-modality decision models for depressive disorder are studied. Considering the varied clinical manifestations of patients with depression, this thesis divides the judgment of depression into three parts: speech, facial expression, and text semantics. For speech, feature selection is taken as the direction of improvement; on the basis of the selected features, a convolutional neural network (CNN) is used to determine depressive disorder. For facial expression, a long short-term memory (LSTM) network is used, since it can effectively attend to relevant information over time; on this basis, a bidirectional LSTM (BiLSTM) is further adopted so that information from both past and future time steps can be exploited. For text semantics, an LSTM and a CNN are connected in parallel in a dual-channel structure, which avoids losing too much useful information in processing and attends to both global and local features. Because the modalities differ, a different method is used for each of the three, making the learning of each single-modality model more effective. Experiments are designed and carried out on the DAIC-WOZ dataset. The results show that the decision accuracy of the proposed models is about 1% higher than that of comparable models, which verifies their superiority.

On the other hand, a multimodal joint-information decision model for depression is studied. Because speech, facial expression, and text semantics are all relevant to the expression of depression, this thesis uses decision-level fusion to combine the trained single-modality models for the final emotion analysis and classification, outputting a multimodal depression judgment for each patient. Compared with other published results, the experiments improve the recognition accuracy of depression by about 5% on both male and female data, which fully verifies the effectiveness of the proposed model.

The experimental results show that the multi-source information joint decision model designed in this thesis can judge whether a subject has depression, and that the decision accuracy after information fusion is higher than that of any single network using speech, image, or text features alone. The system can serve as an efficient, convenient, and practical auxiliary means for the diagnosis of depression.
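As an illustration of the dual-channel text model described above, the following is a minimal sketch, not the thesis code: all layer sizes and names are assumptions. It runs an LSTM branch (global, sequence-level features) and a 1-D convolutional branch (local, n-gram-like features) in parallel over word embeddings and concatenates their outputs for classification.

```python
import torch
import torch.nn as nn

class DualChannelTextNet(nn.Module):
    """Hypothetical dual-channel (LSTM + CNN) text classifier for depression judgment."""
    def __init__(self, vocab_size, embed_dim=128, hidden=64, n_filters=64, kernel=3, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Channel 1: BiLSTM captures long-range (global) dependencies in the transcript.
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        # Channel 2: 1-D convolution captures local phrase-level features.
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=kernel, padding=1)
        self.fc = nn.Linear(2 * hidden + n_filters, n_classes)

    def forward(self, tokens):                # tokens: (batch, seq_len) of word indices
        x = self.embed(tokens)                # (batch, seq_len, embed_dim)
        lstm_out, _ = self.lstm(x)            # (batch, seq_len, 2 * hidden)
        global_feat = lstm_out[:, -1, :]      # last time step summarizes the sequence
        conv_in = x.transpose(1, 2)           # (batch, embed_dim, seq_len)
        local_feat = torch.relu(self.conv(conv_in)).max(dim=2).values  # max over time
        fused = torch.cat([global_feat, local_feat], dim=1)
        return self.fc(fused)                 # logits: not depressed vs. depressed
```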
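Likewise, the decision-level fusion step can be sketched as a weighted combination of the probabilities produced by the three single-modality models; the fusion weights below are illustrative assumptions, not values reported in the thesis.

```python
import numpy as np

def fuse_decisions(p_speech, p_face, p_text, weights=(0.3, 0.3, 0.4)):
    """Decision-level fusion sketch: weighted average of per-modality class probabilities.

    Each p_* is a probability vector over {not depressed, depressed}; the weights
    are hypothetical and would normally be tuned on validation data.
    """
    probs = np.stack([p_speech, p_face, p_text])        # (3, n_classes)
    fused = np.average(probs, axis=0, weights=weights)  # weighted mean per class
    return int(np.argmax(fused)), fused                 # predicted label, fused probabilities

# Example usage with made-up single-modality outputs for one subject.
label, probs = fuse_decisions(np.array([0.4, 0.6]),
                              np.array([0.3, 0.7]),
                              np.array([0.55, 0.45]))
```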