
Pretraining Method of Medical Image Models Based on Self-Supervised Contrastive Learning

Posted on: 2023-10-01
Degree: Master
Type: Thesis
Country: China
Candidate: S F Liu
Full Text: PDF
GTID: 2530306806956079
Subject: Computer application technology
Abstract/Summary:
Automated medical image processing has made significant progress since the emergence of deep learning based on convolutional neural networks. Under the mainstream supervised paradigm, network training relies on large-scale labeled data. Because annotation demands substantial labor and professional expertise, and because medical applications are often restricted to narrow scenarios, labeling medical images is especially costly, so it is difficult to improve model performance simply by collecting more labeled data. At the same time, many medical image datasets contain large numbers of unlabeled original images that supervised models cannot exploit. Self-supervised learning has great potential for solving this problem: it learns feature representations from unlabeled data in a pre-training stage and thereby improves the accuracy of downstream tasks, raising the performance of medical image models without increasing the amount of labeled data while making full use of the redundant unlabeled images. This thesis focuses on self-supervised pre-training based on contrastive learning and its application in network models. The main research contents and contributions are as follows:

1) A self-supervised pre-training method based on contrastive learning is proposed to improve the performance of medical image models. Feature extraction with a convolutional neural network is combined with the contrastive-learning training paradigm to form a self-supervised pre-training method that learns prior knowledge from image data. In the pre-training stage, latent feature representations are learned from unlabeled medical images, and a contrastive loss function learns the relationships between data mappings in a high-dimensional vector space. The method relies on two neural networks, called the online network and the target network, whose inputs are different augmented views of the same image; the online network is trained to predict the target network's output, while the target network's weights are slowly updated from those of the online network (a minimal sketch of this training loop appears after contribution 2 below). In the downstream fine-tuning task, the network model loads the parameters of the online network and is trained again on labeled data. Experimental results show that the proposed method improves model performance without increasing the amount of existing data, verifying the effectiveness of self-supervised pre-training.

2) The self-supervised pre-training method based on contrastive learning is applied to medical image segmentation, and a medical image segmentation network based on an attention structure is proposed. The network infers attention maps along the spatial and channel dimensions through an attention block and multiplies the attention maps with the input feature map, implementing adaptive feature refinement that highlights the salient features useful to the segmentation target (see the second sketch below). The proposed self-supervised learning method is combined with this network model. Experimental results show that the attention structure improves the feature-extraction ability of the network and the accuracy of image segmentation, and that model performance improves further after self-supervised contrastive pre-training.
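The online/target scheme described in contribution 1 follows a BYOL-style design. The following is a minimal PyTorch sketch of such a training loop, not the thesis's exact implementation; the ResNet-18 backbone, MLP sizes, learning rate, and momentum value are illustrative assumptions.

    # Sketch of a BYOL-style online/target pre-training step (assumed setup).
    import copy
    import torch
    import torch.nn.functional as F
    from torchvision import models

    def mlp(in_dim, hidden_dim=4096, out_dim=256):
        # Projector/predictor head; sizes are illustrative.
        return torch.nn.Sequential(
            torch.nn.Linear(in_dim, hidden_dim),
            torch.nn.BatchNorm1d(hidden_dim),
            torch.nn.ReLU(inplace=True),
            torch.nn.Linear(hidden_dim, out_dim),
        )

    encoder = models.resnet18(num_classes=256)   # hypothetical backbone
    online = torch.nn.Sequential(encoder, mlp(256))
    predictor = mlp(256)
    target = copy.deepcopy(online)               # target = slow copy of online
    for p in target.parameters():
        p.requires_grad = False                  # target is never trained by SGD

    optimizer = torch.optim.Adam(
        list(online.parameters()) + list(predictor.parameters()), lr=3e-4)

    def loss_fn(p, z):
        # Negative cosine similarity between prediction and target projection.
        p, z = F.normalize(p, dim=-1), F.normalize(z.detach(), dim=-1)
        return 2 - 2 * (p * z).sum(dim=-1).mean()

    def train_step(view1, view2, momentum=0.99):
        # Online network predicts the target network's output for the other view.
        loss = loss_fn(predictor(online(view1)), target(view2)) \
             + loss_fn(predictor(online(view2)), target(view1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Slowly update the target network from the online weights (EMA).
        with torch.no_grad():
            for po, pt in zip(online.parameters(), target.parameters()):
                pt.mul_(momentum).add_(po, alpha=1 - momentum)
        return loss.item()

After pre-training, only the online network's encoder parameters would be loaded into the downstream model for fine-tuning on labeled data; the stop-gradient on the target output and the slow EMA update are what prevent the two networks from collapsing to a trivial constant representation.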
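The attention block described in contribution 2 matches a CBAM-style design: attention maps are inferred along the channel and spatial dimensions and multiplied with the input feature map. Below is a minimal sketch under that assumption; the reduction ratio, kernel size, and ordering are illustrative, not the thesis's exact block.

    # Sketch of a channel + spatial attention block for feature refinement.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
            mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
            return torch.sigmoid(avg + mx).view(b, c, 1, 1)

    class SpatialAttention(nn.Module):
        def __init__(self, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            # Pool across channels, then infer a 2-D spatial attention map.
            pooled = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
            return torch.sigmoid(self.conv(pooled))

    class AttentionBlock(nn.Module):
        # Refines a feature map: channel attention first, then spatial attention.
        def __init__(self, channels):
            super().__init__()
            self.ca = ChannelAttention(channels)
            self.sa = SpatialAttention()

        def forward(self, x):
            x = x * self.ca(x)   # re-weight each channel
            x = x * self.sa(x)   # re-weight each spatial location
            return x

    # Usage: refine an encoder feature map inside a segmentation network.
    feat = torch.randn(2, 64, 32, 32)
    refined = AttentionBlock(64)(feat)   # same shape, adaptively re-weighted

Because the block preserves the feature map's shape, it can be dropped between encoder and decoder stages of a segmentation network without changing the surrounding architecture.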
Keywords/Search Tags: Contrastive Learning, Self-supervised Learning, Pre-training method, Attention mechanism, Medical image processing