
Research On Multimodal Medical Image Analysis Based On Deep Transfer Learning

Posted on: 2021-05-16
Degree: Master
Type: Thesis
Country: China
Candidate: T Ling
Full Text: PDF
GTID: 2404330647458917
Subject: Computer Science and Technology
Abstract/Summary:
Medical images contain rich anatomical structure and related pathological information, and medical image analysis is of great significance for the diagnosis and treatment of diseases. Traditional medical image analysis methods rely on hand-crafted features for training, whereas deep learning models can automatically learn optimal features and significantly outperform traditional methods. However, training deep learning models requires a large number of manually labeled samples, which are scarce in many practical applications. Therefore, this thesis studies multimodal medical image analysis based on deep transfer learning, takes prostate image segmentation as an example, and designs three novel algorithms that segment prostate images with few or even no labels. Our research work can be summarized as follows:

1. A novel multimodal U-net based algorithm for prostate segmentation in CT images is proposed. For the multimodal medical image segmentation task, we make full use of the complementary role of the MR modality to the CT modality and design a new multimodal U-shaped segmentation network, MM-unet, together with a multimodal loss function, MM-Loss. First, initial segmentation models for MRI and CT images are trained separately via model transfer; then the MRI and CT branches of MM-unet are trained jointly; finally, the resulting model is used to segment CT images with improved performance.

2. A novel cross-modal class distribution alignment algorithm for adversarial domain adaptation is proposed. For the unsupervised domain-adaptive classification task, we first adopt the CycleGAN architecture as the base model for image translation between the source and target modalities. Then we design a discriminative structure-preserving loss, a conditional adversarial generation loss, and a classification consistency constraint loss to align the class distributions of the two modalities. Finally, the model trained on source images can be transferred to target images.

3. A novel cross-modal adaptive image segmentation algorithm based on context feature alignment is proposed. For the unsupervised domain-adaptive segmentation task, we design a novel feature learning and transfer model. First, a feature learning network is constructed to extract and align context features between the source and target modalities; a segmentation network shared by the two modalities is then built, and the entropy of the target segmentation predictions is minimized to obtain high-confidence outputs. Finally, the segmentation results of the two modalities are measured and aligned to further match the feature distributions and segmentation models of the two modalities (a minimal sketch of the entropy-minimization step is given below).
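To make the entropy-minimization step of the third algorithm concrete, the following is a minimal PyTorch sketch, not the thesis implementation: the function names, the hypothetical shared segmentation network "model", and the weighting factor "lambda_ent" are illustrative assumptions.

# Hypothetical sketch: entropy minimization on target-domain segmentation
# predictions, one ingredient of the third algorithm described above.
import torch
import torch.nn.functional as F

def prediction_entropy_loss(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean per-pixel entropy of softmax predictions.

    logits: (N, C, H, W) raw segmentation scores for unlabeled target images.
    Minimizing this term pushes the shared segmentation network toward
    high-confidence (low-entropy) predictions on the target modality.
    """
    probs = F.softmax(logits, dim=1)                         # (N, C, H, W)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)   # (N, H, W)
    return entropy.mean()

def training_step(model, src_images, src_masks, tgt_images, lambda_ent: float = 0.01):
    """One illustrative update: supervised loss on the labeled source modality
    plus a weighted entropy term on the unlabeled target modality."""
    src_logits = model(src_images)   # labeled source data
    tgt_logits = model(tgt_images)   # unlabeled target data
    loss_seg = F.cross_entropy(src_logits, src_masks)
    loss_ent = prediction_entropy_loss(tgt_logits)
    return loss_seg + lambda_ent * loss_ent

In the thesis, this term would be combined with the context feature alignment and output-level alignment described above; the sketch shows only the entropy component.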
Keywords/Search Tags:Deep transfer learning, Unsupervised domain adaptation, Multimodality, Medical image segmentation