
Unified Multi-modal Segmentation For Medical Images Based On Generative Adversarial Networks

Posted on: 2022-06-30
Degree: Master
Type: Thesis
Country: China
Candidate: W G Yuan
Full Text: PDF
GTID: 2480306569481034
Subject: Computer technology
Abstract/Summary:
Medical image segmentation is an important part of computer-aided diagnosis; it helps doctors locate targets and perform quantitative evaluation before surgery. To fully understand a patient's condition, doctors often examine the patient with several imaging techniques, generating multi-modal data. Recently, deep neural networks with multi-modal inputs have made breakthroughs in medical image segmentation. However, collecting multi-modal data is time-consuming and laborious, and in clinical practice it is common for one or more modalities to be missing. Maintaining a separate network for each modality also burdens doctors and researchers with choosing the best model. To remove the constraint of multi-modal input and let doctors and researchers devote their energy to disease diagnosis itself rather than to model selection, this research focuses on the unified multi-modal segmentation problem: training a single model shared across all modalities. Unified multi-modal segmentation unites multiple datasets of different modalities, alleviating data scarcity; moreover, joint multi-modal training helps to fully discover the commonalities between modalities and improves the model's performance on each of them. Building on current modality translation based on generative adversarial learning, this work proposes a multi-task unified multi-modal segmentation framework that combines modality translation and image segmentation. Its main innovations are:

(1) A unified multi-modal segmentation model, UTMS, that incorporates modality translation. Since the shape and size of the segmentation target should be consistent before and after modality translation, we integrate the segmentation task into the multi-modal image translation task. On the one hand, the translation task enriches the extracted features and provides implicit data augmentation for segmentation; on the other hand, the segmentation task supplies the target's contour information to the translation task. Experiments confirm the effectiveness of the UTMS model. (A sketch of such a joint objective is given after the abstract.)

(2) A semi-supervised unified multi-modal model, sUTMS, based on translation consistency. Owing to the shortage of medical resources and the tedious labeling process, unlabeled data far outnumber labeled data in clinical practice, so mining the value of unlabeled data with only a small amount of labeled data is of great practical significance. Based on the assumption that segmentation results before and after modality translation should be consistent, we treat the segmentation result of the source-modality data as a pseudo label and force the segmentation result after modality translation to agree with it. Experiments show that this translation-consistency semi-supervised loss significantly improves the performance of the original model. (A sketch of such a consistency loss follows the joint-objective sketch below.)
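To make innovation (1) concrete, here is a minimal sketch of how a joint translation-plus-segmentation objective of this kind can be written in PyTorch. All module names and signatures (generator, seg_head, discriminator) and the least-squares GAN formulation are illustrative assumptions, not the thesis's actual implementation.

    import torch.nn.functional as F

    def utms_loss(generator, seg_head, discriminator, x_src, mask, lambda_seg=1.0):
        """Joint translation + segmentation loss for one labeled source batch.

        generator:     translates source-modality images into the target
                       modality and also returns its intermediate features
        seg_head:      predicts segmentation logits from those features
        discriminator: scores whether a target-modality image looks real
        x_src:         source-modality images, shape (B, C, H, W)
        mask:          ground-truth segmentation, shape (B, H, W), long
        """
        x_fake, feats = generator(x_src)
        # Adversarial term: the generator tries to make the translated image
        # indistinguishable from real target-modality images (LSGAN form).
        adv = ((discriminator(x_fake) - 1.0) ** 2).mean()
        # Segmentation term: the mask supervises the shared features, so the
        # target's shape and size are preserved through translation.
        seg = F.cross_entropy(seg_head(feats), mask)
        return adv + lambda_seg * seg

The weight lambda_seg balancing the two tasks is likewise an assumed hyperparameter.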
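For innovation (2), the translation-consistency loss on unlabeled data can be sketched in the same spirit. Again, the function names (segment, translate) and the choice of cross-entropy against a hard pseudo label are assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def consistency_loss(segment, translate, x_unlabeled):
        """Semi-supervised loss on an unlabeled source-modality batch.

        segment:     image -> segmentation logits, shared across modalities
        translate:   source-modality image -> translated target-modality image
        x_unlabeled: unlabeled images, shape (B, C, H, W)
        """
        with torch.no_grad():
            # Pseudo label: the prediction on the untranslated source image.
            pseudo = segment(x_unlabeled).argmax(dim=1)
        # Force the prediction on the translated image to agree with it.
        return F.cross_entropy(segment(translate(x_unlabeled)), pseudo)

In training, a term like this would be added to the supervised loss for the labeled portion of each batch.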
Keywords/Search Tags: Multi-modal Learning, Unified Model, Medical Image Segmentation, Generative Adversarial Networks