
Feature Extraction And Selection From Multi-Modality Data Based On Deep Learning

Posted on: 2017-04-27    Degree: Master    Type: Thesis
Country: China    Candidate: L Zhao    Full Text: PDF
GTID: 2348330515464192    Subject: Computer technology
Abstract/Summary:
With the development of the Internet and related technologies, large amounts of data with different structures and representations are widely used in daily life and scientific research. Most of these data are unstructured and high-dimensional, their representations vary across modalities, and the modalities may contain considerable redundant or noisy information. Such multi-modal data cannot be fed directly into many conventional machine learning algorithms, so it is necessary to extract new features from the multiple modalities and to evaluate each modality in order to filter out the redundancy and noise. This thesis proposes a series of algorithms for multi-modal feature extraction and selection based on deep learning.

Many conventional methods cannot be applied to multi-modal data because of the heterogeneity among modalities. This thesis presents a novel deep multi-modal network that introduces deep learning into multi-modal feature extraction. Each modality is assigned its own sub-network and transformed into a new, unified representation; a shared layer on top of the sub-networks establishes the latent connections between modalities, and finally a shared, fused multi-modal feature is extracted.

On the other hand, different modalities have different degrees of relevance to the current task. To evaluate this relevance, we propose a feature extraction and selection algorithm that combines deep learning with sparse representation. First, the multi-modal deep networks convert the heterogeneous modalities into a unified homogeneous representation; then a structural-sparsity method is exploited to obtain importance weights for the modalities. The most important features are selected, while the redundancy and noise are filtered out.

A series of experiments shows that the two proposed algorithms extract the shared, fused multi-modal feature effectively, and that the framework evaluates every modality correctly and filters out the redundant and noisy information present in multi-modal data.
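To make the two ideas above concrete, the following is a minimal sketch, not the thesis code: it assumes PyTorch, illustrative layer sizes, and per-modality scalar gates as a stand-in for the structural-sparsity modality weighting described in the abstract.

```python
# Sketch of the abstract's two ideas: per-modality sub-networks with a shared
# fusion layer, plus a sparsity penalty that weights (and filters) modalities.
# All names and dimensions are illustrative assumptions, not the original model.
import torch
import torch.nn as nn


class MultiModalNet(nn.Module):
    """One sub-network per modality, a shared fusion layer on top."""

    def __init__(self, modality_dims, hidden_dim=128, fused_dim=64):
        super().__init__()
        # Each modality gets its own sub-network that maps its raw
        # representation into a common (homogeneous) hidden space.
        self.subnets = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU())
             for d in modality_dims]
        )
        # Per-modality gates; a sparsity penalty on these plays the role of
        # the structural-sparsity importance weights in the abstract.
        self.gates = nn.Parameter(torch.ones(len(modality_dims)))
        # Shared layer that fuses the per-modality features.
        self.shared = nn.Linear(hidden_dim * len(modality_dims), fused_dim)

    def forward(self, modalities):
        # modalities: list of tensors, one per modality, each (batch, d_i)
        feats = [g * net(x)
                 for g, net, x in zip(self.gates, self.subnets, modalities)]
        return self.shared(torch.cat(feats, dim=1))

    def modality_sparsity_penalty(self):
        # L1 penalty drives unimportant modality gates toward zero, which is
        # how redundant or noisy modalities get filtered out in this sketch.
        return self.gates.abs().sum()


# Hypothetical usage with three modalities of dimensions 300, 512 and 40.
if __name__ == "__main__":
    net = MultiModalNet([300, 512, 40])
    batch = [torch.randn(8, 300), torch.randn(8, 512), torch.randn(8, 40)]
    fused = net(batch)                       # shared fused multi-modal feature
    loss = fused.pow(2).mean() + 0.01 * net.modality_sparsity_penalty()
    loss.backward()
    print(fused.shape)                       # torch.Size([8, 64])
```

After training, modalities whose gates remain close to zero can be treated as redundant or noisy and dropped, which mirrors the modality evaluation and selection step described above.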
Keywords/Search Tags:Multi-Modality Data, Feature Extraction, Feature Learning, Deep Learning