
Brain-Machine Collaborative Intelligence For Facial Expression Recognition

Posted on: 2024-04-04    Degree: Master    Type: Thesis
Country: China    Candidate: D J Liu    Full Text: PDF
GTID: 2558307103469514    Subject: Computer Science and Technology
Abstract/Summary:
Neural network models have shown promising prospects for facial expression recognition (FER). Machines can learn simple visual features from images, but the generalization of a model trained on a dataset with only a few samples is limited. Unlike the machine, the human brain can effectively extract the required information from a few samples, which compensates to some extent for the lack of visual features in understanding expressions. Combining EEG features with visual features can therefore improve the efficiency of brain-machine collaboration. At present, there is a lack of effective techniques for transferring intelligence between the brain and the machine, and how to realize brain-like intelligent feature extraction in the machine to accomplish facial expression recognition is in urgent need of in-depth research and key breakthroughs.

(1) This thesis proposes a Brain-Machine Feature Mapping (BMFM) method that realizes brain-like intelligent feature extraction in the machine. Since electroencephalogram (EEG) signals reflect brain activity, the cognitive process of the brain is decoded by a model following a reverse-engineering approach. A random forest (RF) is trained to obtain the brain-machine feature mapping from image visual features to EEG cognitive features. The learned mapping is used to generate brain-like intelligent features, which are fused with the image visual features to complete the facial expression recognition task.

(2) This thesis proposes a Brain-Machine Coupled Learning (BMCL) method, which addresses the problem that BMFM ignores the different mechanisms of the human brain and the machine. BMCL uses visual images and EEG signals to jointly train models in the visual and cognitive domains. Each domain model consists of two types of interactive channels, a common channel and a private channel, which learn the knowledge shared between the two domains and the knowledge specific to each domain, respectively. The concatenation of both channels in the visual domain is used to complete the facial expression recognition task, enabling effective information interaction between the two modalities.

(3) This thesis proposes a Brain-Machine Generative Adversarial Network (BM-GAN) framework, which addresses the problem that BMFM ignores the complex cognitive processes of the brain. BM-GAN uses the cognitive knowledge learned from EEG signals to guide a convolutional neural network (CNN) to generate brain-like intelligence features, which are used to complete the facial expression recognition task. This enables brain-machine transfer across heterogeneous intelligences and introduces sufficient cognitive knowledge.

After learning is completed, the proposed methods perform facial expression recognition without the involvement of EEG signals, and experiments demonstrate that they achieve excellent FER performance under this setting.
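The BMFM idea described in (1) can be illustrated with a short sketch. The following Python code is a minimal, hypothetical reconstruction assuming generic visual and EEG feature vectors, a scikit-learn random forest as the mapping model, and a simple logistic-regression classifier on the fused features; it is not the thesis's actual implementation, and all dimensions and data are placeholders.

```python
# Hypothetical sketch of Brain-Machine Feature Mapping (BMFM): a random
# forest learns a mapping from image visual features to EEG cognitive
# features; at inference time the predicted "brain-like" features are
# fused with the visual features for expression classification.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: 200 paired samples of visual features (e.g. CNN
# embeddings) and EEG features, with 7 expression classes.
X_visual = rng.normal(size=(200, 128))   # visual features per image
X_eeg = rng.normal(size=(200, 32))       # EEG features recorded while viewing
y = rng.integers(0, 7, size=200)         # expression labels

# 1. Learn the brain-machine feature mapping (visual -> EEG).
mapper = RandomForestRegressor(n_estimators=100, random_state=0)
mapper.fit(X_visual, X_eeg)

# 2. Generate brain-like features for images (no EEG required any more).
X_brainlike = mapper.predict(X_visual)

# 3. Fuse visual and brain-like features and train the FER classifier.
X_fused = np.concatenate([X_visual, X_brainlike], axis=1)
clf = LogisticRegression(max_iter=1000).fit(X_fused, y)
print("training accuracy:", clf.score(X_fused, y))
```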
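The coupled training described in (2) can be sketched as follows. This PyTorch code is an illustrative assumption of how common and private channels might be coupled across the two domains (here with a simple MSE alignment loss between the common channels); layer sizes, the loss weighting, and the network shapes are invented for the example and do not reproduce the thesis's BMCL model.

```python
# Hypothetical sketch of Brain-Machine Coupled Learning (BMCL): each domain
# (visual, cognitive/EEG) has a common and a private channel; the common
# channels are pulled together across domains, and the concatenated visual
# channels feed the expression classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainModel(nn.Module):
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.common = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.private = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())

    def forward(self, x):
        return self.common(x), self.private(x)

visual_net = DomainModel(in_dim=128)
cognitive_net = DomainModel(in_dim=32)
classifier = nn.Linear(64 * 2, 7)  # concatenated visual channels -> 7 classes

params = (list(visual_net.parameters()) + list(cognitive_net.parameters())
          + list(classifier.parameters()))
optim = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def coupled_step(img_feat, eeg_feat, labels, align_weight=0.1):
    """One coupled training step over paired image/EEG feature batches."""
    v_common, v_private = visual_net(img_feat)
    c_common, _ = cognitive_net(eeg_feat)

    # FER loss on the concatenation of both visual channels.
    logits = classifier(torch.cat([v_common, v_private], dim=1))
    loss_fer = ce(logits, labels)

    # Coupling loss: pull the common channels of the two domains together.
    loss_align = F.mse_loss(v_common, c_common)

    loss = loss_fer + align_weight * loss_align
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# Example usage with random placeholder tensors.
loss = coupled_step(torch.randn(16, 128), torch.randn(16, 32),
                    torch.randint(0, 7, (16,)))
```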
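Finally, the adversarial guidance described in (3) can be sketched in the same spirit. The code below is a minimal, hypothetical BM-GAN-style setup in which a CNN generator maps face images to brain-like features, a discriminator compares them against real EEG cognitive features, and a classifier uses the generated features for FER; the 48x48 grayscale input, the network shapes, and the loss formulation are assumptions for illustration, not the thesis's architecture.

```python
# Hypothetical sketch of a BM-GAN-style pipeline: EEG features guide a CNN,
# via an adversarial discriminator, to generate brain-like features that are
# then classified into expressions. At test time only the image is needed.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """CNN that produces brain-like features from a face image."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, img):
        return self.fc(self.conv(img))

generator = Generator()
discriminator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
classifier = nn.Linear(32, 7)  # 7 expression classes

opt_g = torch.optim.Adam(list(generator.parameters())
                         + list(classifier.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def train_step(images, eeg_feat, labels):
    # Discriminator: real EEG features vs. generated brain-like features.
    fake = generator(images)
    d_loss = (bce(discriminator(eeg_feat), torch.ones(len(eeg_feat), 1))
              + bce(discriminator(fake.detach()), torch.zeros(len(fake), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + classifier: fool the discriminator and classify expressions.
    fake = generator(images)
    g_loss = (bce(discriminator(fake), torch.ones(len(fake), 1))
              + ce(classifier(fake), labels))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# After training, FER needs only the image: classifier(generator(image)).
losses = train_step(torch.randn(8, 1, 48, 48), torch.randn(8, 32),
                    torch.randint(0, 7, (8,)))
```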
Keywords/Search Tags:Brain-machine collaborative intelligence, transfer learning, multimodal learning, generative adversarial network, EEG signals, facial expression recognition