
Research On Multimodal Fusion Algorithm Based On Self-paced Learning

Posted on: 2020-11-14
Degree: Master
Type: Thesis
Country: China
Candidate: N Yuan
Full Text: PDF
GTID: 2428330590972687
Subject: Software engineering
Abstract/Summary:
Human understanding of the outside world is usually based on the comprehensive response of multiple senses, such as sight, hearing, and touch. Multi-modal data are representations of the same thing in different forms, and by fusing multi-modal data people can understand the essential structure of things more deeply. In multi-modal fusion, traditional fusion models often ignore the impact of sample importance on the fusion result. This thesis introduces the self-paced learning model to address this problem. Self-paced learning mimics the human education process: samples are ordered from easy-to-learn to hard-to-learn, and the model is trained on them step by step. The research content of this thesis is a multi-modal fusion algorithm based on self-paced learning. The main work and innovations are as follows:

First, a multi-modal fusion model based on low-rank representation is proposed. Different modalities of the same data share common information; by imposing low-rank constraints on the multi-modal data, we can extract the latent main structure shared between modalities, helping each modality classify the samples better. Then, for different multi-modal data sets, we use the L2,p norm to adaptively control the sparsity between them. Considering the non-convexity of the L2,p norm, this thesis proposes a general framework that transforms the problem into a convex one and proves the rationality and convergence of the original objective. At the same time, multi-modal data often suffer from missing values; the self-paced learning model is used to analyze the importance of the samples and to help describe the correlation between different modalities, further increasing the robustness and generalization of the model. Experimental results on the ADNI and multi-spectral palmprint data sets show that the proposed model achieves higher classification accuracy on multi-modal classification problems.

In addition, the single-layer self-paced learning model may become unstable due to
the lack of samples in the early training stage. Therefore, a multi-layer self-paced learning multi-modal fusion method is proposed. The convergence result of one self-paced learning model is passed as prior knowledge to the next layer of training, which effectively improves the stability of the initial model. Moreover, when the multi-modal data contain a lot of noise, focusing only on easy-to-learn samples may harm the generalization performance of the model. Misclassified samples from each iteration are therefore assigned a higher weight, so that the model pays more attention to them in the next iteration. We verified the effectiveness of the proposed method by comparing it with several fusion models on UCI public data sets.
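The self-paced training loop described above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual model: it uses a toy linear classifier and the standard hard self-paced weighting scheme (a sample gets weight 1 if its loss is below an "age" threshold lambda, else 0, with lambda grown each round), and it passes one layer's converged sample weights to the next layer as a prior, as the multi-layer method does. All data, function names, and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data standing in for one modality (hypothetical).
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true + 0.3 * rng.normal(size=200))

def fit_weighted_ridge(X, y, v, reg=1e-2):
    """Closed-form sample-weighted ridge regression as a simple base learner."""
    W = np.diag(v)
    return np.linalg.solve(X.T @ W @ X + reg * np.eye(X.shape[1]), X.T @ W @ y)

def self_paced_train(X, y, lam=0.4, growth=1.5, rounds=6, v_init=None):
    """Single-layer self-paced learning with the hard weighting scheme:
    v_i = 1 if loss_i < lam, else 0. The threshold lam (the model 'age')
    grows each round, so harder samples are gradually admitted."""
    v = np.ones(len(y)) if v_init is None else v_init
    for _ in range(rounds):
        w = fit_weighted_ridge(X, y, v)
        losses = (X @ w - y) ** 2
        v = (losses < lam).astype(float)  # keep only the easy samples
        if v.sum() == 0:                  # avoid a degenerate all-zero round
            v = np.ones(len(y))
        lam *= growth                     # admit harder samples next round
    return w, v

# Multi-layer variant: the converged sample weights of one layer become the
# prior (initial weights) of the next layer, stabilising early training.
w1, v1 = self_paced_train(X, y)
w2, v2 = self_paced_train(X, y, lam=0.2, v_init=v1)
acc = np.mean(np.sign(X @ w2) == y)
```

In the full method each modality would have its own learner coupled through the shared low-rank structure, and misclassified samples would additionally receive increased weight between iterations; the sketch only shows the easy-to-hard curriculum and the layer-to-layer prior.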
Keywords/Search Tags:multi-modal data, data extraction, low-rank representation, sparsity, norm, self-paced learning