
Application Of Self-supervised Learning In Echocardiographic Left Ventricle Segmentation

Posted on: 2022-11-28
Degree: Master
Type: Thesis
Country: China
Candidate: S Li
Full Text: PDF
GTID: 2504306779995429
Subject: Computer Software and Application of Computer
Abstract/Summary:
Multi-view echocardiographic sequences have become the examination of choice for diagnosing congenital heart disease because they are non-invasive, radiation-free, low-cost, and provide real-time imaging. In echocardiography, accurate segmentation of the left heart, including the left ventricle and left atrium, allows important physiological parameters such as ventricular and atrial volumes and ejection fraction to be measured, so that cardiac function can be evaluated accurately. Image analysis is a time-consuming and expensive task that usually requires experienced professionals, which often leads to long waiting times for patients and can even cause the best treatment window to be missed; a method that greatly shortens the time spent on early image analysis is therefore particularly important. Manual segmentation requires the user to outline the region of interest by hand; marking the position or contour of the left ventricle manually is tedious and time-consuming, and there are subjective differences between observers. Automatic segmentation is therefore superior to manual segmentation.

At present, segmentation of cardiac ultrasound images faces the following difficulties: (1) echocardiographic images contain heavy noise and artifacts that cause image edges to be lost, and ultrasound images have a low signal-to-noise ratio, low contrast, and low resolution, which leads to loss of local information and incomplete anatomical structures; (2) the same organ presents different anatomical structures in different views, so the structure of the left ventricle varies across viewing angles; (3) under different protocols and the settings of different vendors and centers, the gray-value distribution and spatial texture of ultrasound images differ greatly, which further increases inter-vendor variability; (4) in medical image research the greatest difficulty is the amount of data, since data acquisition and labeling are subject to severe limitations.

Our survey found that current research mainly targets the first three difficulties, mostly with fully supervised learning frameworks. Although many models handle those three difficulties well, the problem of limited labeled data still has no good solution. We therefore propose a method combining self-supervised learning with superpixel segmentation, which deeply mines the information in raw data and is one potential way to address the above challenges. The proposed framework incorporates superpixels into self-supervised learning for medical images to eliminate the need for annotations during training: it defines three self-supervised pretext tasks that acquire global and local information from unlabeled images, and then transfers the learned network to our downstream model. We demonstrate the general applicability of the proposed method to medical images on a variety of tasks.

We used 10,000 low-resolution and 200 normal-resolution unlabeled cardiac echocardiogram sequences, each containing about 30 ultrasound images. First, we apply superpixel segmentation to the raw ultrasound images; in this process we differentiate not only the left ventricle but also sub-anatomical structures in the background. This helps subsequent tasks identify such interfering information and further improves the accuracy and stability of left-ventricle segmentation in echocardiography. Second, we employ three pretext tasks: patch localization, rotation prediction, and colorization. These tasks train the model fully in the three dimensions of position, angle, and color, ensuring that it extracts deep-level knowledge from the unlabeled data; in this process the model acquires not only information about the left ventricle but, with the help of the superpixel task, also fully learns the sub-anatomical structure information in the image. Then, so that the model fully learns all types of image information, we integrate the knowledge the model generates across the three pretext tasks. Finally, we transfer the parameters from the pretext tasks to our downstream task, which only needs fine-tuning to achieve the desired segmentation performance.

Training, validating, and testing on this data set, our model's segmentation performance is close to that of fully supervised learning under the same experimental conditions. Compared with methods such as U-net, ACNN, and BYOL, our model achieves better results on metrics including Dice, mIoU, MAD (mm), and HD (mm), and also performs better in consistency and correlation analyses. Our results show that for medical image segmentation, the proposed method outperforms traditional methods that require manual annotations for training.
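The rotation pretext task described above can be illustrated with a minimal sketch: from one unlabeled frame, build four rotated copies and the rotation class the network must predict. The function name and the toy frame are illustrative, not taken from the thesis.

```python
import numpy as np

def make_rotation_batch(frame):
    """Build a self-supervised rotation-prediction sample set from one
    unlabeled ultrasound frame: four rotated copies of the frame plus
    the rotation class (0/90/180/270 degrees) as a free label."""
    images = [np.rot90(frame, k) for k in range(4)]
    labels = list(range(4))  # 0 -> 0 deg, 1 -> 90 deg, 2 -> 180 deg, 3 -> 270 deg
    return images, labels

# Toy 4x4 "frame" standing in for a real echocardiogram image.
frame = np.arange(16, dtype=float).reshape(4, 4)
images, labels = make_rotation_batch(frame)
print(len(images), labels)  # 4 [0, 1, 2, 3]
```

A classifier trained to recover these labels is forced to learn the orientation-dependent anatomy of the view, which is the "angle" dimension of knowledge the abstract refers to.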
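Dice, one of the evaluation metrics reported above, measures the overlap between a predicted mask and the ground truth. A minimal numpy sketch, using tiny synthetic masks rather than thesis data:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

# Synthetic example: prediction covers 4 pixels, ground truth covers 6,
# and they overlap on 4 pixels -> Dice = 2*4 / (4+6) = 0.8.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(dice(a, b))  # 0.8
```

mIoU, MAD, and HD are computed analogously from the same mask pairs; Dice is shown here because it is the most common headline metric for left-ventricle segmentation.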
Keywords/Search Tags:deep learning, self-supervised learning, superpixel segmentation