Automatic segmentation of brain tumors from medical images is important for growth assessment and clinical decision-making. High segmentation performance often requires multi-modal or contrast-enhanced images. For glioma segmentation, fluid-attenuated inversion recovery (FLAIR) is the High-Contrast (HC) modality for whole tumor segmentation, while T1-weighted, contrast-enhanced T1-weighted, and T2-weighted images are Low-Contrast (LC) modalities. However, obtaining multi-modal images is expensive and time-consuming, so the high-contrast modality is often missing. For vestibular schwannoma segmentation, contrast-enhanced T1-weighted (T1c) is the HC modality, while T2-weighted is the LC modality; here, too, the HC modality may be missing because the T1c contrast agent poses risks to patient safety. To improve patient safety and save time while still allowing accurate assessment of the tumor, this thesis proposes a novel two-stage multi-task framework based on adversarial learning and consistency regularization to synthesize HC MRI for brain tumor segmentation when only LC images or a subset of modalities are available.

(1) This thesis uses a multi-task generator to simultaneously obtain a synthesized HC image and a coarse segmentation. To generate a segmentation-friendly HC image, this thesis proposes a tumor-focused adversarial loss and a tumor perceptibility loss to minimize the high-level semantic domain gap between synthesized and real HC images. The two tasks benefit each other by reducing overfitting and boosting performance. (2) This thesis proposes a multi-task fine segmentation network that takes the synthesized HC image and the coarse segmentation as input, and simultaneously predicts the final segmentation and the errors in the coarse segmentation, where a consistency constraint between these two predictions is introduced for better segmentation performance. (3) A novel end-to-end framework for high-contrast image synthesis and accurate brain tumor segmentation is introduced together with a joint training strategy, in which synthesis and segmentation are learned synergistically in a multi-task learning pipeline. (4) The framework was validated on two applications: synthesizing FLAIR images from T1, T2, and contrast-enhanced T1 images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images from regular T2 images for vestibular schwannoma segmentation.

Compared with whole glioma segmentation from low-contrast images, my framework improved the average Dice score from 84.54% to 87.55%, and the improvement was significant under a paired t-test. For vestibular schwannoma segmentation from low-contrast images, my framework improved the average Dice score from 86.00% to 89.46% at 0.4 mm resolution and from 79.95% to 82.44% at 1.0 mm resolution, and these improvements were also significant under a paired t-test. The results show that my method substantially improved segmentation accuracy compared with direct segmentation from the original partial modalities or low-contrast images, and that it outperformed state-of-the-art image synthesis methods for segmentation.
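The fine-segmentation stage above combines a segmentation loss with a consistency term linking the final prediction to the coarse segmentation corrected by the predicted error map. The exact loss formulation is not given in this abstract, so the following is only a minimal NumPy sketch under one plausible assumption: the consistency term penalizes disagreement between the fine prediction and the coarse map corrected by the error map (a soft relaxation of coarse XOR error), and the segmentation loss is a standard soft Dice loss. The weight `lam` and all function names are hypothetical.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary target."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def consistency_loss(fine_pred, coarse_seg, error_pred):
    """Mean-squared disagreement between the fine prediction and the coarse
    segmentation corrected by the predicted error map. For soft maps,
    coarse XOR error is relaxed to |coarse - error|. (Assumed form.)"""
    corrected = np.abs(coarse_seg - error_pred)
    return float(np.mean((fine_pred - corrected) ** 2))

def total_loss(fine_pred, coarse_seg, error_pred, target, lam=0.1):
    """Segmentation loss plus a weighted consistency regularizer (lam assumed)."""
    return dice_loss(fine_pred, target) + lam * consistency_loss(
        fine_pred, coarse_seg, error_pred
    )
```

In this sketch, a fine prediction that matches the error-corrected coarse map incurs zero consistency penalty, so the regularizer only activates when the two heads of the fine network disagree.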