
Research On Deep Learning-based Multivariate-temporal Representation Methods For Biomedical Signals

Posted on: 2020-06-29
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Yuan
Full Text: PDF
GTID: 1364330623456657
Subject: Electronic Science and Technology
Abstract/Summary:
With recent advances in pervasive sensing technologies and healthcare information systems, biomedical signals such as multi-modality polysomnography and multi-channel electroencephalogram recordings can be captured simultaneously by multiple sensors, providing abundant physiological information that reflects a patient's health condition from different aspects. How to efficiently learn feature representations from such multivariate-temporal waveform data has become a key enabler for clinical diagnosis and has attracted increasing research interest in biomedical informatics. To this end, several researchers have attempted to apply deep learning to capture representative information in biosignals. However, existing deep learning models fail to incorporate the multivariate-temporal characteristics of biosignals. The main challenge is how to fully combine the unique properties of biosignals with deep learning, including waveform temporal correlations, multivariate prior information, interdependent feature patterns, and heterogeneous clinical manifestations. This dissertation investigates deep learning-based multivariate-temporal representation methods for biosignals, aiming to provide novel methodologies for end-to-end deep representation learning of biosignals and practical technologies to support clinical diagnosis, in order to promote the development and application of deep learning in healthcare. The main research contents are as follows:

1. A multi-context deep semantic learning method is proposed. To handle the challenge that most unsupervised methods ignore the temporal correlations between waveform segments of biosignals, the proposed method adopts semantic learning to construct a static context-based waveform word encoding network using stacked autoencoders, and a dynamic context-based waveform embedding network using Skip-gram. A unified objective function is designed by sharing the feature representations of both networks, in order to train an end-to-end
multi-context deep semantic network. Experimental results on two biosignal datasets show that the proposed method is superior to other unsupervised feature learning methods and improves the autoencoding capability of multi-level temporal features for high-resolution waveform biosignals.

2. A multi-view deep learning framework is proposed. Most deep learning methods fail to explicitly incorporate the multivariate prior information of biosignals to learn different aspects of physiological features, and the irrelevant and redundant information in multivariate data may interfere with end-to-end training. To address this, the proposed method extracts inter- and intra-variate features of biomedical signals by modifying the network architecture and training it with both unsupervised reconstruction and supervised classification errors. Following a stimulation-response mode, a sparse penalty term based on mutual competition is designed to guide the network to focus on important and relevant information during training. Experimental results show that, compared with baselines, the proposed method better captures useful information from different aspects and significantly enhances the quality of multivariate representations of biosignals.

3. A hybrid attentive deep representation learning method for vector feature fusion is proposed. To address the challenge that existing deep learning models lack a mechanism to fuse features of biosignals according to the interdependencies among multivariate and temporal patterns, the proposed method presents a fusional attention mechanism combined with the aforementioned multi-feature extraction strategies. By learning a global-local fusion rate, multivariate and temporal information can be dynamically integrated. Normalized contribution score vectors in the variate and time dimensions are further derived to measure their pattern interdependencies and to fuse feature vectors by weighted
aggregation, respectively. Experimental results show that the proposed method outperforms baselines on both multi-modal and multi-channel biosignal datasets, demonstrating its fusion capability for biosignals.

4. A two-dimensional attention network for deep matrix fusion representation learning is proposed. To address the challenge of heterogeneous feature patterns caused by individual differences in clinical practice, the proposed method mimics the practical inspection process in which a doctor jointly attends to details in both the time and variate dimensions. The method assigns two-dimensional attention energies to the matrix feature sequence and derives a normalized contribution score matrix using a hybrid focus procedure, in order to jointly fuse features in the matrix space. Experimental results show that the proposed method can fuse matrix features by distinguishing the importance of different variates over time, and hence improves the generalization and clinical practicability of the multivariate-temporal fusion features of biosignals.
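The vector-fusion idea in the third contribution can be illustrated with a minimal numpy sketch. This is not the dissertation's implementation: the scoring vectors `q_var` and `q_time` and the scalar `gate` (standing in for the learned global-local fusion rate) are hypothetical placeholders for learned parameters, and the mean-pooling used to score each variate and time step is one simple choice of many. The sketch only shows the core mechanics: normalized contribution scores over the variate and time dimensions, followed by weighted aggregation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax producing normalized contribution scores
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_vector_fusion(H, q_var, q_time, gate=0.5):
    """Fuse a (V, T, D) multivariate-temporal feature tensor into a
    single D-dimensional vector.

    H      : features for V variates over T time steps, D dims each
    q_var  : (D,) scoring vector for the variate dimension (placeholder)
    q_time : (D,) scoring vector for the time dimension (placeholder)
    gate   : stand-in for the learned global-local fusion rate
    """
    # score each variate / time step by pooling over the other dimension
    a_var = softmax(H.mean(axis=1) @ q_var)    # (V,) variate scores
    a_time = softmax(H.mean(axis=0) @ q_time)  # (T,) time scores
    # weighted aggregation in each dimension, then gated combination
    f_var = a_var @ H.mean(axis=1)             # (D,)
    f_time = a_time @ H.mean(axis=0)           # (D,)
    fused = gate * f_var + (1.0 - gate) * f_time
    return fused, a_var, a_time
```

In a trained model the scores would come from learned attention parameters rather than fixed query vectors, but the normalization-then-aggregation structure is the same.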
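The fourth contribution differs in that the contribution scores are normalized jointly over both dimensions, yielding a score matrix rather than two separate score vectors. The following sketch, again with a hypothetical scoring vector `u` standing in for learned attention parameters, shows how a two-dimensional attention energy over the (variate, time) grid can be turned into a normalized contribution score matrix and used for fusion in the matrix space.

```python
import numpy as np

def two_dim_attention_fusion(H, u):
    """H: (V, T, D) matrix feature sequence; u: (D,) scoring vector
    (a hypothetical stand-in for learned attention parameters).
    Returns a fused D-dimensional feature and a (V, T) contribution
    score matrix normalized jointly over both dimensions."""
    energy = H @ u                        # (V, T) attention energies
    e = np.exp(energy - energy.max())     # stable exponentiation
    A = e / e.sum()                       # joint softmax over the V x T grid
    fused = np.einsum('vt,vtd->d', A, H)  # weighted fusion in matrix space
    return fused, A
```

Because `A` sums to one over the whole grid, a single entry `A[v, t]` directly measures how much variate `v` at time `t` contributes to the fused feature, which mirrors the described goal of distinguishing the importance of different variates over time.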
Keywords/Search Tags:Biomedical signals, deep learning, multivariate time series analysis, representation learning