
Research On Intelligent Delineation Of Organs-at-Risk And Tumor For Cancer Radiotherapy And Image Fusion

Posted on: 2020-06-12
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J X Tian
GTID: 1364330620954214
Subject: Control Science and Engineering

Abstract/Summary:
Precision radiotherapy relies on multi-modal imaging techniques such as CT, MRI and PET to guide high-energy radiation dose to converge on the tumor while protecting the surrounding normal organs to the greatest possible extent. The goal of image-guided radiotherapy is to improve tumor control rates, reduce radiotherapy-related toxicity, prolong patient survival and improve quality of life. The radiotherapy workflow includes image acquisition, image fusion, delineation of target volumes and organs at risk, treatment planning, and plan validation. High-precision delineation of the gross tumor volume (GTV) and organs at risk (OARs) is a prerequisite and one of the key techniques for the successful implementation of image-guided adaptive radiotherapy. At the same time, the fusion of CT, MRI and PET images provides complementary anatomical and functional information for clinical practice and helps improve the accuracy of target and OAR delineation. At present, radiation oncologists manually delineate the tumor target and OARs on 2D cross-sections slice by slice. Manual delineation is not only time-consuming and laborious but also depends on the clinician's knowledge and experience, so the results of different clinicians lack consistency. Meanwhile, the accuracy of existing automatic delineation methods does not meet the requirements of clinical radiotherapy. To address these problems, this dissertation studies key technologies for automatic gross tumor volume delineation, automatic organs-at-risk delineation and image fusion, and achieves the following results:

(1) A review of deep learning methods for medical image analysis. Deep learning algorithms, such as convolutional neural networks (CNNs), can automatically extract hidden diagnostic features from medical image data and are now widely used to analyze medical images. We review the major deep learning methods for medical image analysis. First, the characteristics of medical image analysis are briefly introduced. Then, the principles of deep learning are analyzed, the popular CNN is highlighted, and common frameworks for image classification and segmentation are summarized. Third, the state of the art in deep-learning-based medical image analysis is reviewed and discussed. Finally, the challenges and practicable strategies of deep learning for medical image analysis, as well as open research problems, are discussed.

(2) Automatic delineation of OARs in head-and-neck CT images. To minimize post-treatment complications and reduce the risk of radiation-induced secondary malignancies, organs at risk such as the brainstem, mandible, parotid glands, submandibular glands, optic nerves and optic chiasm must be accurately delineated. To improve the delineation accuracy of small organs, an automatic deep-learning-based method is proposed for head-and-neck OAR segmentation. A modified V-Net is constructed to extract deep and shallow features of OARs through end-to-end supervised learning. To address the extreme class imbalance of small organs, a sampling strategy restricted by positional prior knowledge is proposed, and the Dice loss function is used to train the network. The method was validated on the PDDCA dataset used in the MICCAI 2015 Head and Neck Auto-Segmentation Challenge. The mean Dice coefficients are 0.945 for the mandible, 0.884 for the left parotid gland, 0.882 for the right parotid gland, 0.863 for the brainstem, 0.825 for the left submandibular gland, 0.842 for the right submandibular gland, 0.807 for the left optic nerve, 0.847 for the right optic nerve and 0.583 for the optic chiasm. The 95% Hausdorff distances of the mandible, parotid glands, brainstem and submandibular glands are all within 3 mm, and the mean contour distances of all organs are less than 1.2 mm.
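As a concrete illustration of the training setup in (2), the sketch below shows a soft Dice loss and a positional-prior patch-sampling helper of the kind described above. This is a minimal sketch only: the thesis does not publish its code, so the function names, tensor shapes and the `prior_box` parameter are illustrative assumptions, not the actual implementation.

```python
# Minimal sketch (assumed, not the thesis's code) of a soft Dice loss and a
# positional-prior sampling helper for class-imbalanced OAR segmentation.
import numpy as np
import torch

def soft_dice_loss(probs: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """probs: (N, C, D, H, W) softmax outputs; target: one-hot, same shape."""
    dims = (0, 2, 3, 4)                      # sum over batch and spatial axes
    intersection = (probs * target).sum(dims)
    cardinality = probs.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()                 # average Dice over organ classes

def sample_patch_center(prior_box, rng: np.random.Generator):
    """Draw a training-patch center inside a positional prior box
    (z0, z1, y0, y1, x0, x1), so that rare small organs such as the
    optic chiasm appear in most sampled patches."""
    z0, z1, y0, y1, x0, x1 = prior_box
    return (rng.integers(z0, z1), rng.integers(y0, y1), rng.integers(x0, x1))
```

Restricting patch centers this way trades some background diversity for far more foreground examples of the small organs, which is what the Dice loss needs to produce a useful gradient.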
(3) Automatic delineation of OARs in thoracic CT images. Thoracic cancers, including lung cancer, breast cancer and esophageal cancer, are among the most prevalent cancers in China. Because the accuracy of automatic delineation of slender organs at risk cannot meet the clinical requirements of treatment planning in thoracic radiotherapy, a step-by-step method combining anatomical prior knowledge with deep dilated convolutional neural networks (DCNNs) is proposed to delineate the lungs, heart, spinal cord and esophagus. First, a deep dilated convolutional neural network is constructed to delineate both lungs effectively. Then, prior knowledge of each organ's anatomical position relative to the lungs is used to locate the remaining organs. Finally, the located regions are used to train three DCNNs that delineate the heart, spinal cord and esophagus, respectively. The method is validated on the LCTSC dataset of the 2017 AAPM Thoracic Auto-Segmentation Challenge. The mean Dice coefficients are 0.966 for the left lung, 0.969 for the right lung, 0.930 for the heart, 0.900 for the spinal cord and 0.774 for the esophagus. The experiments demonstrate that the proposed method improves the delineation accuracy of the esophagus and spinal cord. In addition, taking the clinicians' delineations as the reference standard, the models were tested on thoracic CT images of 8 lung-cancer patients; the mean Dice coefficients of the left lung, right lung, heart, spinal cord and esophagus are 0.950, 0.964, 0.869, 0.862 and 0.772, respectively, further verifying the effectiveness of the proposed method.
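To make the dilated-convolution idea in (3) concrete, here is a minimal sketch of a dilated block that enlarges the receptive field without pooling, which suits elongated structures such as the esophagus and spinal cord. The abstract does not specify the actual DCNN layout, so the channel counts, dilation rates and 2D formulation below are illustrative assumptions.

```python
# Sketch (assumed layout) of a dilated-convolution block: stacking 3x3
# convolutions with exponentially growing dilation rates widens the
# receptive field while keeping full spatial resolution.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    def __init__(self, in_ch: int = 1, ch: int = 32):
        super().__init__()
        layers = []
        for i, d in enumerate((1, 2, 4, 8)):      # growing dilation rates
            layers += [nn.Conv2d(in_ch if i == 0 else ch, ch,
                                 kernel_size=3, padding=d, dilation=d),
                       nn.BatchNorm2d(ch),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # padding == dilation keeps the output the same size as the input
        return self.body(x)

# Example: a single-channel 512x512 CT slice passes through unchanged in size.
features = DilatedBlock()(torch.randn(1, 1, 512, 512))
```

With rates (1, 2, 4, 8), four 3x3 layers already see a context window of several dozen pixels, which is why such blocks can follow a thin organ across a slice without downsampling it away.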
(4) Automatic delineation of the GTV. An automatic method based on deep 3D CNNs and PET/CT images is proposed for gross tumor volume delineation in nasopharyngeal cancer radiotherapy. To investigate the influence of network structure on delineation performance, two frameworks based on deep fully convolutional networks (FCNs) are designed. The first uses a two-pathway FCN to integrate local and broader context information, and is named 2PW-FCN. The second uses residual connections and fuses multi-scale context information for GTV segmentation, and is named DeepSeg. To validate the proposed networks, 48 PET/CT scans of nasopharyngeal cancer patients, together with the corresponding GTVs labeled by clinical radiation oncologists, were exported from a clinical radiotherapy treatment planning system via DICOM files. 39 scans were used to train the models, and the converged models were tested on the remaining 9. The experimental results demonstrate that DeepSeg achieves a mean DSC of 0.8205 ± 0.0745, about 0.0612 higher than that of 2PW-FCN.

(5) Multi-modal medical image fusion. High-precision adaptive intensity-modulated radiotherapy requires not only high-spatial-resolution CT and MRI systems to provide accurate location information of target volumes and organs at risk, but also PET systems to provide information on biological characteristics such as tumor metabolism, proliferation, hypoxia and radiosensitivity (radioresistance). Previous studies have found that edge and texture features play an important role in clinical diagnosis and target delineation, so it is essential to preserve them in fused images. A fusion algorithm based on the non-subsampled shearlet transform (NSST) and a pulse-coupled neural network (PCNN) is proposed. The PCNN adaptively captures edges and detail information in the high-frequency directional sub-bands of the NSST, and the firing counts of PCNN neurons represent the probability that the corresponding pixels carry edge or detail information. Fusion rules are formulated to preserve edge and texture features: a weighting method based on local regional energy fuses the low-frequency sub-band coefficients, while the PCNN model discriminates edge details among the high-frequency coefficients. Experiments on MRI/PET and CT/PET image sets demonstrate that the proposed method performs well in terms of both subjective visual quality and objective indices, preserving the edges and textures of the source images well.
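The high-frequency fusion rule in (5) can be sketched as follows: run a simplified PCNN on each NSST high-frequency sub-band and keep, at each position, the coefficient whose neuron fired more often. The NSST decomposition itself is omitted here (it has no standard-library implementation), and all parameter values are illustrative assumptions rather than the thesis's settings.

```python
# Simplified PCNN firing map (assumed parameters, not the thesis's settings).
# Pixels whose neurons fire more often over the iterations are treated as
# stronger edge/detail candidates in a high-frequency sub-band.
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(band: np.ndarray, iterations: int = 30,
                    beta: float = 0.2, alpha_theta: float = 0.2,
                    v_theta: float = 20.0) -> np.ndarray:
    F = np.abs(band)                       # feeding input: sub-band magnitude
    Y = np.zeros_like(F)                   # current firing state
    theta = np.ones_like(F)                # dynamic threshold
    fires = np.zeros_like(F)               # accumulated firing counts
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])   # linking weights to neighbours
    for _ in range(iterations):
        L = convolve(Y, kernel, mode="constant")     # linking from neighbours
        U = F * (1.0 + beta * L)                     # modulated activity
        Y = (U > theta).astype(F.dtype)              # fire above threshold
        theta = theta * np.exp(-alpha_theta) + v_theta * Y  # raise after firing
        fires += Y
    return fires

# Fusion rule sketch: keep the coefficient whose neuron fired more often.
# fused = np.where(pcnn_firing_map(a) >= pcnn_firing_map(b), a, b)
```

The linking term lets a strong edge pixel pull its neighbours into firing in the same iteration, which is what makes the firing count a neighbourhood-aware edge indicator rather than a plain magnitude comparison.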
Keywords/Search Tags:deep learning, convolutional neural networks, image segmentation, delineation of gross tumor volume for radiotherapy, delineation of organs-at-risk for radiotherapy, image fusion