
Deep Learning Based Automatic Segmentation Of Nasopharyngeal Carcinoma Organs At Risk Area

Posted on: 2022-09-29
Degree: Master
Type: Thesis
Country: China
Candidate: W H Lei
Full Text: PDF
GTID: 2504306524978569
Subject: Precision instruments and machinery
Abstract/Summary:
Nasopharyngeal Carcinoma (NPC) is a leading form of Head-and-Neck (HaN) cancer in the Arctic, China, Southeast Asia, and the Middle East/North Africa. Accurate segmentation of Organs-at-Risk (OARs) from Computed Tomography (CT) images with uncertainty information is critical for effective radiation therapy planning in NPC treatment. Despite the state-of-the-art performance achieved by Convolutional Neural Networks (CNNs) for automatic OAR segmentation, existing methods do not provide uncertainty estimation of the segmentation results for treatment planning, and their accuracy is still limited by several factors, including the low contrast of soft tissues in CT, highly imbalanced sizes of OARs, and large inter-slice spacing.

To address these problems, we propose a novel framework for accurate OAR segmentation with reliable uncertainty estimation. First, we propose a Segmental Linear Function (SLF) to transform the intensity of CT images so that multiple organs become more distinguishable than with existing methods based on a simple window width/level, which often gives good visibility of one organ while hiding the others. Second, to deal with the large inter-slice spacing, we introduce a novel 2.5D network (named 3D-SepNet) specially designed for clinical HaN CT scans with anisotropic spacing. Third, whereas existing hardness-aware loss functions typically deal with class-level hardness, our proposed attention to hard voxels (ATH) uses a voxel-level hardness strategy, which better handles hard regions even when their corresponding class is easy. Last but not least, we use an ensemble of models trained with different loss functions and intensity transforms to obtain robust results, which also yields segmentation uncertainty without extra effort. Our method won third place in the HaN OAR segmentation task of the StructSeg 2019 challenge, achieving a weighted average Dice of 80.52% and a 95% Hausdorff Distance of 3.043 mm.

Experimental results show that: 1) our SLF intensity transform helps to improve the accuracy of OAR segmentation from CT images; 2) with only 1/3 the parameters of 3D UNet, our 3D-SepNet obtains better segmentation results for most OARs; 3) the proposed hard-voxel weighting strategy used in training effectively improves segmentation accuracy; and 4) the segmentation uncertainty obtained by our method correlates highly with mis-segmentations, which has the potential to support more informed decisions in clinical practice.

Although the automatic segmentation models achieve effective results, they remain insufficient for direct clinical use. This is mainly because: 1) CT images have low contrast around soft tissues; 2) there is considerable variation between patients; and 3) organs such as the optic nerves and optic chiasm are small. In contrast, interactive segmentation methods can combine human expert knowledge with machine intelligence to improve the accuracy and efficiency of segmentation. Traditional interactive segmentation methods, e.g., Graph Cut and ITK-SNAP, are simple enough for non-expert users. However, they rely mainly on low-level features and thus need multiple interactions to achieve satisfactory results, which increases the burden on users. Based on these observations, this thesis investigates the combination of CNNs and user interaction for multi-organ segmentation, achieving higher segmentation accuracy in less time.
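The SLF idea described above can be sketched as a piecewise-linear mapping from Hounsfield Units to normalized intensities, with several segments so that more than one organ's intensity range stays visible at once. This is a minimal illustrative sketch, not the thesis's actual implementation; the breakpoint values used below are hypothetical.

```python
def segmental_linear_transform(hu, breakpoints):
    """Piecewise-linear (segmental) intensity transform.

    breakpoints: sorted list of (input_HU, output_intensity) control
    points. Values between consecutive control points are mapped
    linearly; values outside the range are clamped to the endpoints.
    """
    xs = [p[0] for p in breakpoints]
    ys = [p[1] for p in breakpoints]
    if hu <= xs[0]:          # clamp below the first segment
        return ys[0]
    if hu >= xs[-1]:         # clamp above the last segment
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= hu <= xs[i + 1]:
            # linear interpolation within this segment
            t = (hu - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])


# Hypothetical breakpoints: one segment for air/soft-tissue transition,
# one for soft tissue, one for bone -- unlike a single window/level,
# each range keeps its own contrast slope.
bp = [(-1000, 0.0), (-200, 0.2), (200, 0.8), (1000, 1.0)]
```

Compared with a single window width/level, which maps one HU range linearly and saturates everything else, each segment here can be given its own slope, so soft tissue and bone can both retain contrast in the same transformed image.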
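The voxel-level hardness strategy of ATH can be illustrated by scaling each voxel's loss term by how badly that voxel is predicted, independent of its class. The weight form `1 + lam * |p - y|` and the `lam` parameter below are assumptions for illustration, not the thesis's exact formulation.

```python
import math


def ath_weighted_ce(probs, labels, lam=1.0):
    """Voxel-level hard-example weighting (sketch of the ATH idea).

    Each voxel's binary cross-entropy term is scaled by a hardness
    weight w = 1 + lam * |p - y|, so poorly predicted voxels contribute
    more to the loss even when their class is easy overall.

    probs:  predicted foreground probabilities, one per voxel (flattened).
    labels: 0/1 ground-truth labels, one per voxel.
    lam:    hypothetical knob controlling how strongly hardness is weighted.
    """
    eps = 1e-7
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1 - eps)  # avoid log(0)
        ce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        w = 1.0 + lam * abs(p - y)     # per-voxel hardness weight
        total += w * ce
    return total / len(probs)
```

With `lam = 0` this reduces to plain mean cross-entropy; increasing `lam` shifts the gradient budget toward mis-predicted voxels, which is the distinction the abstract draws against class-level hardness weighting.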
Keywords/Search Tags:Medical image segmentation, Interactive segmentation, Intensity transform, Convolutional Neural Network, Uncertainty