
Medical Image Segmentation Using A Limited Amount Of Labeled Data

Posted on: 2024-12-10  Degree: Doctor  Type: Dissertation
Country: China  Candidate: X D Luo  Full Text: PDF
GTID: 1520307373470904  Subject: Mechanical engineering
Abstract/Summary:
Medical image segmentation is a fundamental and critical task in computer-aided diagnosis systems, especially in applications such as image-guided surgical planning and radiotherapy planning. Deep learning with convolutional neural networks (CNNs) has recently made remarkable progress in many automated image segmentation tasks. However, fully supervised learning typically demands a substantial amount of finely annotated data to achieve satisfactory segmentation performance. The scarcity of large-scale, high-quality annotated data hinders the development of high-performance deep learning models and consequently restricts their clinical applicability. It is therefore crucial to develop efficient data annotation methods and model training strategies that work with limited annotated data. As an alternative, interactive segmentation methods can use limited annotated data and minimal user interaction to build models for rapid image annotation. In addition, weakly supervised learning has shown its potential to produce high-performance models from sparse annotations (points, lines, bounding boxes, or image-level tags) and has achieved promising results on multiple tasks. Furthermore, since real clinical scenarios contain large amounts of unlabeled data, semi-supervised learning can be used to train models by combining limited labeled data with abundant unlabeled data. Moreover, employing active learning to select a small number of informative samples for annotation and model fine-tuning is another effective means of addressing the high cost of data annotation. The main contributions and innovations of this dissertation are summarized as follows:

1. To alleviate the low efficiency and high cost of medical image segmentation annotation, this dissertation proposes a generalizable interactive segmentation and annotation method based on geodesic distance transformation. The method develops a novel interactive segmentation workflow that combines deep learning algorithms across three aspects: the user interaction pipeline, interaction encoding, and segmentation refinement. First, considering the shape and intensity-distribution characteristics of medical images, it proposes new interior-margin points as user interactions, which reduce the amount of interaction required and provide more shape information for organs with complex and irregular shapes. Simultaneously, a new context-aware, parameter-free exponential geodesic distance transformation is proposed to encode user interactions. Finally, a novel information fusion step followed by a graph-cuts algorithm is proposed for segmentation refinement. Experiments on multi-center, multi-task segmentation datasets demonstrate that the method achieves accurate segmentation with minimal user interaction: compared with the widely used annotation tool ITK-SNAP, the average accuracy of annotating a 3D image improves by about 6% and the annotation time is reduced nearly five-fold. Furthermore, the method can be applied to interactive annotation of unseen modalities or target categories without retraining or fine-tuning the model.

2. To address the limited supervision signal provided by sparse annotations, this dissertation designs a scribble-supervised medical image segmentation method based on a dual-branch network and dynamically mixed pseudo-label supervision. The method is a novel weakly supervised segmentation algorithm that enables deep learning models to learn from scribble annotations and achieve accurate segmentation. Specifically, the study first introduces a dual-branch network that generates diverse predictions, overcoming the inherent difficulty of updating pseudo labels. It then proposes a dynamically mixed pseudo-label method that combines the predictions of the two branches to generate high-quality pseudo labels for model training. The method achieves promising results on cardiac structure segmentation and abdominal multi-organ segmentation datasets: compared with learning only from the annotated pixels, it improves segmentation accuracy by 19.54% and 11.33%, respectively. These encouraging results demonstrate that the proposed method enhances the ability of deep learning models to learn from scribble annotations.

3. To address the situation, common in real clinical scenarios, in which labeled data are scarce while unlabeled data are abundant, this dissertation presents a semi-supervised medical image segmentation method based on dual-task consistency. The method leverages different representations of the segmentation task to build a dual-task segmentation network. The two tasks are then mapped into the same representation space via a differentiable task-transformation function, and a cross-task consistency constraint is constructed to enforce similarity between the predictions of the two tasks. Experiments on left atrium and pancreas segmentation datasets show that the method achieves segmentation results comparable to fully supervised training with only about 20% of the labeled data, suggesting that the dual-task consistency constraint encourages the model to learn from unlabeled data.

4. To tackle weak model generalization in multi-center clinical applications, this dissertation proposes an active-learning-based, source-free domain adaptation method and conducts a multi-center clinical assessment. The method alleviates the reliance of existing deep learning approaches on distribution-consistent data, and their neglect of differences in image information, by first formulating a novel unsupervised active domain adaptation scenario. It then uses active learning and a model pre-trained on the source domain to select the most informative samples from the target domain for annotation and model fine-tuning. To validate the effectiveness of this approach, multi-center, multi-rater datasets for nasopharyngeal carcinoma primary tumor volume delineation are constructed. Experimental results demonstrate that the method achieves accurate cross-domain segmentation with limited annotated data: performance comparable to full annotation is reached by annotating only about 20% of the data, actively selected. Multi-center clinical validation further shows that the method's predictions require minimal correction by doctors before clinical use, with an average revision degree below 9% and a revision time under two minutes, yielding nearly five times higher per-patient segmentation efficiency than manual delineation.

This work focuses on the challenges that the lack of labeled data and the diversity of annotation forms pose to the development of high-performance medical image segmentation models. It studies medical image segmentation with limited labeled data and develops a series of efficient, deep-learning-based data labeling methods and model training strategies to alleviate the reliance of medical image segmentation models on large-scale labeled data.
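The exponential geodesic distance encoding of contribution 1 can be sketched roughly as follows. This is a minimal illustration, not the dissertation's exact formulation: the 4-connected grid approximation, the unit spatial step cost, the intensity-difference term, and the decay parameter `lam` are all simplifying assumptions.

```python
import heapq
import numpy as np

def exp_geodesic_map(image, seeds, lam=1.0):
    """Illustrative exponential geodesic distance encoding.

    Approximates the geodesic distance from a set of seed (user
    interaction) points on a 2D intensity image using Dijkstra's
    algorithm, where each step's cost mixes spatial length and
    intensity difference, then maps the distance into (0, 1] with
    exp(-lam * d).  Seeds get value 1; far, dissimilar regions
    decay toward 0.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    pq = []
    for (r, c) in seeds:
        dist[r, c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # Step cost: unit spatial length plus intensity jump.
                nd = d + 1.0 + abs(float(image[nr, nc]) - float(image[r, c]))
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return np.exp(-lam * dist)
```

In an interactive pipeline, a map like this would be stacked with the image as an extra input channel so the network sees where the user clicked and how far each pixel is, geodesically, from those clicks.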
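The dynamically mixed pseudo-label supervision of contribution 2 can be illustrated with a short sketch. The uniform redrawing of the mixing coefficient at every training step is an assumed schedule for illustration; the abstract only states that the two branch predictions are dynamically combined and then used as supervision.

```python
import numpy as np

def mixed_pseudo_label(p1, p2, rng):
    """Dynamically mix two decoder-branch predictions into one pseudo label.

    p1, p2: softmax outputs of the two branches, shape (C, H, W).
    A mixing coefficient alpha is redrawn at every call (i.e. every
    training step), so the pseudo label keeps changing and the two
    branches cannot simply collapse onto each other's errors.  The
    mixed probability map is hardened with argmax.
    """
    alpha = rng.uniform(0.0, 1.0)           # redrawn each step -> "dynamic"
    mixed = alpha * p1 + (1.0 - alpha) * p2
    return mixed.argmax(axis=0)             # (H, W) integer label map
```

During training, the scribble pixels would be supervised by the scribble labels directly, while the remaining pixels are supervised by this mixed pseudo label.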
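The cross-task consistency constraint of contribution 3 can be sketched as below. The choice of a signed-distance-map branch as the second task, and the steep-sigmoid transform with steepness `k`, are assumptions borrowed from common dual-task consistency formulations; the abstract specifies only that a differentiable transform maps the two task outputs into the same space.

```python
import numpy as np

def sdm_to_seg(sdm, k=1500.0):
    """Differentiable transform from a signed-distance-map prediction to
    a soft segmentation: a steep sigmoid of the signed distance (inside
    the object the signed distance is negative, so the output tends to 1).
    """
    return 1.0 / (1.0 + np.exp(k * sdm))

def dual_task_consistency(seg_prob, sdm_pred, k=1500.0):
    """Cross-task consistency loss: mean squared difference between the
    direct segmentation probability and the segmentation recovered from
    the signed-distance branch.  Computable on unlabeled images, which
    is what lets the model learn from unannotated data.
    """
    return float(np.mean((seg_prob - sdm_to_seg(sdm_pred, k)) ** 2))
```

On labeled images this term is added to the ordinary supervised loss; on unlabeled images it is the only training signal.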
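The active sample selection of contribution 4 can be sketched with a simple informativeness criterion. Predictive entropy under the source-pretrained model is one plausible score; the dissertation's actual selection criterion is not specified in this abstract.

```python
import numpy as np

def select_informative(probs, budget):
    """Pick the `budget` most informative target-domain samples by the
    predictive entropy of a source-pretrained model: high entropy means
    the model is uncertain, so annotating that sample is likely to help
    most during fine-tuning.

    probs: array of shape (N, C) with per-sample class probabilities.
    Returns the indices of the samples to send for annotation.
    """
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:budget]
```

With a 20% annotation budget, as reported above, selection would run once (or in rounds) over the unlabeled target-domain pool, and the chosen samples would be annotated and used to fine-tune the source model.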
Keywords/Search Tags: Medical Image Segmentation, Interactive Segmentation, Weakly-Supervised Learning, Semi-Supervised Learning, Active Learning