
Heart Sound Signal Classification Based on Deep Learning

Posted on: 2024-09-20
Degree: Master
Type: Thesis
Country: China
Candidate: Z M Ren
Full Text: PDF
GTID: 2530307064985289
Subject: Computer Science and Technology
Abstract/Summary:
The sound of heartbeats reflects the functioning of the heart valves, provides important information about the condition of the heart, and helps doctors judge cardiac health. Automatic heart sound diagnosis therefore plays an important role in the early detection of cardiovascular diseases. Cardiac auscultation is often used to assist doctors in clinical diagnosis because of its low cost and non-invasive nature. However, auscultation requires rich clinical experience and long-term professional training, which greatly increases the cost of early diagnosis. Automated auscultation of heart sounds is thus of significant research value for the auxiliary diagnosis of cardiovascular diseases. This thesis conducts an in-depth study of heart sound signal classification using deep learning methods. The main work includes the following two aspects:

(1) A CNN model integrating time and time-frequency features. For heart sound signal classification, a new CNN model integrating time and time-frequency features (TTFI-CNN) is proposed. In this model, a 1D CNN and a Bi-LSTM are combined into a 1D CRNN module to extract one-dimensional time-domain features from raw phonocardiogram (PCG) signals, and a 2D CNN module is constructed to extract high-level features from two-dimensional dynamic MFCC feature maps. Channel attention is then used to recalibrate and integrate the outputs of the two modules, selecting the features that carry key information. The proposed model is tested on two public datasets; the experiments cover data preprocessing, construction of the experimental environment, network training, and parameter tuning. The results show that the proposed model outperforms previously reported results. A code sketch of this fusion architecture is given after the abstract.

(2) Four heart sound signal classification models based on ViT. The Vision Transformer (ViT) is built mainly on the self-attention mechanism and has a stronger ability to model global information than a CNN. This thesis therefore constructs four ViT-based models that apply self-attention to heart sound signal classification: 1) a 2D ViT model that extracts two-dimensional time-frequency domain features; 2) a 1D ViT model that extracts one-dimensional time-domain features; 3) a parallel fusion model of the 2D ViT and the 1D ViT; and 4) a parallel fusion model of the 2D ViT and the 1D CRNN. The latter two models fuse one-dimensional time-domain features with two-dimensional time-frequency domain features. The proposed models are tested on the public PhysioNet/CinC 2016 dataset and compared with the TTFI-CNN model. The results show that all four ViT-based models perform well for heart sound signal classification; except for the 1D ViT model, they all achieve accuracies above 95%, and the parallel fusion model of the 2D ViT and the 1D CRNN achieves the highest accuracy at 97.33%.

In summary, the models proposed in this thesis effectively extract time-domain and time-frequency domain features and perform well for heart sound signal classification on public datasets, which is of practical significance for the auxiliary diagnosis of cardiovascular diseases.
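The following is a minimal PyTorch-style sketch of the two-branch fusion idea described in point (1): a 1D CRNN branch for raw PCG waveforms, a 2D CNN branch for dynamic MFCC maps, and a channel-attention recalibration of the concatenated features. All layer sizes, kernel widths, and the squeeze-and-excitation style of the attention block are illustrative assumptions, not the thesis's exact configuration.

```python
# Illustrative sketch only: hyperparameters and the SE-style attention are assumptions.
import torch
import torch.nn as nn


class CRNN1D(nn.Module):
    """1D CNN + Bi-LSTM branch for raw PCG time-domain features."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=15, stride=4, padding=7), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.rnn = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (B, 1, T)
        h = self.conv(x).transpose(1, 2)       # (B, T', 64)
        out, _ = self.rnn(h)                   # (B, T', 2*hidden)
        return out.mean(dim=1)                 # (B, 2*hidden)


class CNN2D(nn.Module):
    """2D CNN branch for dynamic MFCC feature maps (static + delta + delta-delta)."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim), nn.ReLU(),
        )

    def forward(self, x):                      # x: (B, 3, n_mfcc, frames)
        return self.net(x)


class TTFICNN(nn.Module):
    """Fuses both branches with channel attention over the concatenated features."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.time_branch = CRNN1D(hidden=64)   # -> 128-dim
        self.tf_branch = CNN2D(out_dim=128)    # -> 128-dim
        fused = 128 + 128
        self.attn = nn.Sequential(             # SE-style recalibration (assumed)
            nn.Linear(fused, fused // 4), nn.ReLU(),
            nn.Linear(fused // 4, fused), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(fused, num_classes)

    def forward(self, pcg, mfcc):
        feat = torch.cat([self.time_branch(pcg), self.tf_branch(mfcc)], dim=1)
        return self.classifier(feat * self.attn(feat))


if __name__ == "__main__":
    model = TTFICNN()
    pcg = torch.randn(4, 1, 4000)              # 4 raw PCG clips
    mfcc = torch.randn(4, 3, 13, 100)          # dynamic MFCC feature maps
    print(model(pcg, mfcc).shape)              # torch.Size([4, 2])
```

The key design choice illustrated here is late fusion: each branch produces a fixed-length feature vector, and channel attention weights the concatenated vector before classification, so the network can emphasize whichever domain (time or time-frequency) is more informative for a given recording.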
Keywords/Search Tags:Heart Sound Signal Classification, Convolution Neural Network, Attention Mechanism, Deep Learning