
Intelligent Auxiliary Diagnosis For Glioma Using Multimodal MRI Images

Posted on: 2023-02-08
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J H Cheng
Full Text: PDF
GTID: 1524307070983199
Subject: Computer application technology
Abstract/Summary:
Glioma is the most common primary intracranial tumor in adults, with high morbidity, recurrence, and mortality, and it is considered one of the most difficult tumors to cure in neurosurgery. Accurate preoperative quantitative assessment, grading, genotyping, and survival prediction of glioma are crucial for treatment decisions and prognosis. With the rapid development of imaging technology, magnetic resonance imaging (MRI) has become a routine examination for glioma patients. Traditionally, radiologists must comprehensively analyze the phenotypic characteristics of multimodal MRI images and make subjective judgments based on personal clinical experience and professional knowledge, a process prone to diagnostic bias and misdiagnosis. Medical image analysis methods based on deep learning have become a core technology in radiomics and radiogenomics; they can efficiently analyze and process multimodal MRI images to achieve a more accurate diagnosis. Applying deep learning to the intelligent diagnosis of glioma not only helps doctors make more scientific and reasonable clinical decisions, but also provides patients with a fast, non-invasive auxiliary diagnosis, reducing the risk and cost of interventional head examinations.

Based on preoperative multimodal MRI images of glioma, this dissertation studies the challenging problems in glioma segmentation, grading diagnosis, genotyping, and survival prediction, and provides targeted solutions. The innovative contributions of this dissertation include:

1) Gliomas vary greatly in size and location and are strongly heterogeneous, which makes automated segmentation difficult, and existing segmentation models have poor feature representation ability for 3D MRI data. This dissertation therefore proposes an automatic glioma segmentation algorithm based on atrous convolution and a residual
attention mechanism, termed RAAU-Net. First, by adopting atrous convolutions with different dilation rates, the network obtains receptive fields of different sizes without introducing additional parameters, capturing multi-scale contextual feature information. Second, a residual attention mechanism is introduced into the skip connections between the encoding and decoding paths to further enhance feature representation for identifying positive lesions. Finally, fully-connected conditional random fields are employed for structured prediction to refine tumor boundaries and eliminate isolated false-positive voxels. Experimental results on the BraTS 2018 dataset show that RAAU-Net achieves Dice values of 0.88 for the whole tumor, 0.80 for the tumor core, and 0.72 for the enhancing tumor. Compared with the baseline network and other segmentation methods, RAAU-Net achieves better segmentation performance.

2) To address the low robustness of existing glioma grading models and the difficulty of jointly training on the complementary information in multimodal MRI images under a unified model, this dissertation proposes a multimodal disentangled variational autoencoder (MMD-VAE) for glioma grading. First, the high-dimensional features of each MRI modality are encoded by a variational autoencoder to extract latent high-order feature representations. Then, these multimodal high-order representations are disentangled using the proposed cross-modality reconstruction loss and common-distinctive loss to extract shared and complementary representations among the modalities. Finally, these shared and complementary representations are fused for glioma grading. To increase the interpretability of the grading model, the SHAP method is adopted to quantitatively explain and analyze the contribution of important features to the grading model. The MMD-VAE model achieves a grading AUC of 0.99 on
the BraTS 2019 dataset and 0.96 on the external validation dataset. When randomly sampling only 25% of the training samples, MMD-VAE still achieves an AUC above 0.90 on both datasets. The experimental results and visualization analysis show that MMD-VAE offers high grading performance, strong robustness, and good interpretability.

3) To address the problem that glioma segmentation and IDH genotyping are mostly handled by single-task learning, ignoring the correlation between the two tasks, this dissertation proposes a multi-task learning method for automatic glioma segmentation and IDH genotyping, named MTTU-Net. A hybrid CNN-Transformer encoder is designed to obtain shared contextual global feature representations for the two tasks, and these shared representations are then used to perform tumor segmentation and IDH genotyping simultaneously in an end-to-end manner. To prevent task bias during multi-task learning, a multi-task loss function based on uncertainty weighting is proposed, which adaptively assigns weights to the two tasks to balance their losses. Meanwhile, an uncertainty-aware pseudo-label selection (UPS) semi-supervised multi-task learning framework is proposed to generate IDH pseudo-labels from a large amount of unlabeled MRI data, improving the accuracy of IDH genotyping. On the independent validation dataset, MTTU-Net outperforms the single-task models: the Dice value for whole-tumor segmentation and the AUC for IDH genotyping both reach 0.90. In addition, UPS-based semi-supervised learning improves IDH genotyping accuracy from 0.86 to 0.90. The experimental results show that MTTU-Net is an effective multi-task learning method that simultaneously improves glioma segmentation accuracy and IDH genotyping performance.

4) To overcome the heavy dependence on prior knowledge that limits traditional survival analysis models, an automatic glioma segmentation and survival prediction method
based on multi-task learning (MSST-Net) is proposed. Departing from the traditional feature-engineering research paradigm, the glioma segmentation task serves as an auxiliary task in the multi-task learning setting to ensure that high-level features of the tumor region are effectively extracted; these high-level features are fused and used for the primary task of survival analysis, modeled in an end-to-end manner. To further improve prediction performance, a ranking loss is designed so that the network learns the survival differences between patients. The multi-task loss is optimized with uncertainty weighting, adaptively adjusting the weight of each loss function to prevent task bias during training. On the independent testing dataset, MSST-Net matches or exceeds the segmentation performance of the single-task glioma segmentation model, and its C-index for survival prediction reaches 0.74, nearly 10% higher than that of the single-task survival model without the supervised segmentation branch. Meanwhile, Kaplan-Meier survival curve analysis shows that the prognoses of the high- and low-risk groups predicted by MSST-Net are consistent with brain tumor malignancy and IDH mutation type, further demonstrating the effectiveness and reliability of the survival prediction.

In summary, this study constructs several intelligent auxiliary diagnosis models for glioma from preoperative multimodal MRI image data, meeting the actual clinical needs of glioma segmentation, grading, genotyping, and survival analysis. These models will help improve doctors' diagnostic efficiency and provide guidance for the personalized diagnosis and treatment of glioma patients.
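The Dice values reported for RAAU-Net and MTTU-Net measure voxel overlap between the predicted and reference tumor masks. The metric itself can be sketched as follows; this is a minimal illustration on flat binary masks, not the dissertation's evaluation code:

```python
def dice_coefficient(pred, target):
    """Dice overlap between two binary masks given as flat 0/1 lists.

    Dice = 2*|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap,
    0.0 means no overlap at all.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * intersection / total
```

In practice the same formula is applied per tumor sub-region (whole tumor, tumor core, enhancing tumor) over 3D voxel arrays, which is how the 0.88 / 0.80 / 0.72 figures above are obtained.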
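Both MTTU-Net and MSST-Net balance their task losses with uncertainty weighting. The abstract does not give the exact formula, so the sketch below uses the common homoscedastic-uncertainty formulation (each task loss scaled by a learned precision, plus a regularizing log-variance term); the function name and parameterization are illustrative assumptions:

```python
import math

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses with learned uncertainty weights.

    Each task i contributes 0.5 * exp(-s_i) * L_i + 0.5 * s_i,
    where s_i = log(sigma_i^2) is a learnable scalar. A noisy task
    drives s_i up, shrinking its effective weight, while the 0.5*s_i
    penalty keeps the weights from collapsing to zero.
    """
    total = 0.0
    for loss, log_var in zip(losses, log_vars):
        precision = math.exp(-log_var)  # 1 / sigma_i^2
        total += 0.5 * precision * loss + 0.5 * log_var
    return total
```

In training, the `log_vars` would be optimized jointly with the network parameters, which is what lets the weighting adapt during training instead of being hand-tuned.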
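The C-index of 0.74 cited for MSST-Net is Harrell's concordance index: among comparable patient pairs, the fraction where the model assigns the higher risk to the patient who actually survived for a shorter time. A minimal uncensoring-aware implementation (a common formulation, not necessarily the dissertation's exact evaluation code) looks like this:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for survival predictions.

    A pair (i, j) is comparable when patient i had an observed event
    (events[i] == 1) and died earlier than patient j. The pair is
    concordant when the model gives i a higher risk score; tied risk
    scores count as half-concordant.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random risk ranking and 1.0 to a perfect ranking, so 0.74 indicates a substantial ordering signal in the learned features.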
Keywords/Search Tags: Intelligent auxiliary diagnosis, Glioma segmentation, Glioma grading, IDH genotyping, Survival analysis, Multimodal MRI