
Multimodal Medical Image Fusion Algorithm And Its Application In Radiotherapy

Posted on: 2022-01-01
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Q Cao
Full Text: PDF
GTID: 1484306740963089
Subject: Computer Science and Technology

Abstract/Summary:
Radiotherapy is a treatment that uses high-energy radiation to kill malignant tumors. Its purpose is to eliminate or shrink tumor cells while protecting normal tissue. The development of modern radiotherapy is inseparable from medical imaging equipment, whose applications run through the stages of treatment decision-making, simulation positioning, planning, and delivery. Imaging provides multi-level, multi-angle, multi-time-point data for clinical practice and ensures that the tumor can be seen clearly and accurately throughout radiotherapy. Owing to differences in imaging principles and application environments, medical images of different modalities contain large amounts of redundant and complementary information. Image fusion technology can integrate images of different modalities organically, using complementary information to improve image clarity and redundant information to improve image reliability. Although research on medical image fusion algorithms has made considerable progress in recent years, many problems remain worth exploring. This dissertation studies multimodal medical image fusion methods and their applications from the perspective of assisting clinical decision-making and guiding precise treatment. The full text covers the following aspects.

(1) Prediction of acute radiation-induced brain injury in GBM by multi-sequence MR image fusion. Acute radiation-induced brain injury is a common side effect in patients with glioblastoma after radiotherapy. To predict the risk of radiation-induced brain injury before treatment and reduce or avoid possible injury, this dissertation proposes a multi-sequence MR image fusion algorithm based on a GPU-accelerated nonsubsampled shearlet transform (NSST) and two-dimensional principal component analysis (2DPCA), and builds a radiomics prediction model on the fused image. First, the algorithm decomposes T1-enhanced and T2-weighted MR images with the GPU-accelerated NSST. Second, it adaptively partitions the high-frequency subbands into blocks by combining global and regional methods, generates the high-frequency fusion parameters with 2DPCA, and adopts a weighted-average strategy for the low-frequency subbands. Finally, it generates the fused image by the inverse transform. Radiomics features are then extracted from the fused image, and the prediction model is built by correlation analysis and logistic regression. Experimental results show that the proposed fusion algorithm achieves better image quality than traditional methods, and the prediction model based on the fused image also outperforms other methods.

(2) Prediction of chemoradiotherapy sensitivity of ESCC by PET-CT image fusion. To predict the chemoradiotherapy sensitivity of patients with esophageal squamous cell carcinoma and avoid over-treatment, this dissertation proposes a PET-CT image fusion algorithm based on texture similarity and a pulse-coupled neural network (PCNN), and builds a radiomics prediction model on the fused image. First, the PET and CT images are decomposed by the nonsubsampled shearlet transform. Second, local binary pattern (LBP) features are extracted from the high-frequency subbands of both images and a texture-similarity measure is defined; the high-frequency fusion parameters are generated from the texture similarity between consecutive sliding windows, while the weights of the low-frequency subbands are computed by the PCNN model. Finally, the fused image is generated by the inverse transform, and the chemoradiotherapy sensitivity prediction model is established on it using the least absolute shrinkage and selection operator (LASSO) combined with logistic regression. Experimental results show that the proposed fusion algorithm retains more image detail than traditional methods, and the radiomics scoring model based on the fused image is robust on real clinical data from two medical institutions.

(3) PET-CT image fusion and its effect on target delineation in non-small cell lung cancer. In radiotherapy target delineation for non-small cell lung cancer (NSCLC), CT images alone have limited ability to distinguish mediastinal lymph node metastasis from atelectasis; combining them with PET images is expected to improve delineation accuracy and reduce the risk of radiotherapy complications. This dissertation therefore proposes a PET-CT image fusion method based on the nonsubsampled contourlet transform (NSCT) and visual saliency, and analyzes its influence on target delineation in NSCLC radiotherapy. First, the PET and CT images are decomposed into low- and high-frequency subbands by the NSCT. Second, the low-frequency fusion rule is formulated with a visual saliency detection method, while a maximum-absolute-value strategy is adopted for the high-frequency subbands. Finally, the fused image is generated by the inverse transform, and the effect of image fusion on target delineation in NSCLC patients is evaluated by both manual delineation and automatic segmentation. Experimental results show that the fusion algorithm achieves better visual quality than traditional methods and significantly improves the accuracy of both manual and automatic delineation of the radiotherapy target.

(4) Co-segmentation of intracranial tumors based on MR-CT image information. Tumor target delineation is essentially an image segmentation problem. For automatic delineation of intracranial tumor targets, this dissertation proposes an improved U-Net model that combines MR and CT information through a collaborative mechanism. Exploiting the characteristics of CT and MR images of intracranial tumors, the algorithm uses a dual-path input to extract CT image information and integrate it into the feature maps of each U-Net channel, with the weight of each feature channel learned during training. The Tversky loss function and the cross-entropy loss function are set as dual training objectives; decision-level fusion is then applied to the output predictions, and the results are refined by a conditional random field. Experimental results show that the proposed algorithm achieves better segmentation results than other methods.
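The block-selection rule in (1) can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: a simple box-blur split stands in for the GPU-accelerated NSST (which has no standard Python library), and the block size, blur kernel, and low-frequency weight are illustrative choices. Each high-frequency block is scored by the leading eigenvalue of its 2DPCA image covariance, and the block with higher energy is kept.

```python
import numpy as np

def decompose(img, k=7):
    """Toy low/high-frequency split: box blur stands in for the NSST."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    low = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    return low, img - low

def block_energy_2dpca(block):
    """Leading eigenvalue of the 2DPCA image covariance G = B^T B / n
    (rows mean-centered); larger values indicate stronger structure."""
    b = block - block.mean(axis=0, keepdims=True)
    g = b.T @ b / max(block.shape[0], 1)
    return float(np.linalg.eigvalsh(g)[-1])   # eigvalsh returns ascending order

def fuse_2dpca(img_a, img_b, bs=8, w=0.5):
    """Low-frequency: weighted average. High-frequency: per-block 2DPCA selection.
    The 'inverse transform' of this toy split is just low + high."""
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)
    fused_high = np.empty_like(high_a)
    h, wd = img_a.shape
    for y in range(0, h, bs):
        for x in range(0, wd, bs):
            ba = high_a[y:y + bs, x:x + bs]
            bb = high_b[y:y + bs, x:x + bs]
            pick = ba if block_energy_2dpca(ba) >= block_energy_2dpca(bb) else bb
            fused_high[y:y + bs, x:x + bs] = pick
    return w * low_a + (1 - w) * low_b + fused_high
```

In the actual algorithm the blocks are chosen adaptively by combining global and regional statistics; fixed 8x8 tiling here is a simplification.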
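The texture-similarity and PCNN rules in (2) can be sketched in the same spirit. Everything below is a hedged simplification: `lbp8` is a plain 8-neighbour LBP, the histogram-intersection similarity, the 0.7 threshold, and the PCNN constants (`beta`, `decay`, `vtheta`, iteration count) are assumptions of this sketch, and the inputs are assumed to be already-decomposed subbands.

```python
import numpy as np

def lbp8(img):
    """8-neighbour local binary pattern codes (np.roll wraps at borders; fine for a sketch)."""
    code = np.zeros(img.shape, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        code |= (np.roll(np.roll(img, dy, 0), dx, 1) >= img).astype(np.uint8) << bit
    return code

def fuse_high_lbp(ha, hb, win=8, sim_thresh=0.7):
    """High-frequency rule: where the two windows' LBP histograms agree (histogram
    intersection), average the coefficients; where they disagree, keep the window
    with larger high-frequency energy."""
    ca, cb = lbp8(ha), lbp8(hb)
    out = np.empty_like(ha)
    h, w = ha.shape
    for y in range(0, h, win):
        for x in range(0, w, win):
            pa, pb = ca[y:y + win, x:x + win], cb[y:y + win, x:x + win]
            hist_a = np.bincount(pa.ravel(), minlength=256) / pa.size
            hist_b = np.bincount(pb.ravel(), minlength=256) / pb.size
            sim = np.minimum(hist_a, hist_b).sum()   # in [0, 1]
            wa, wb = ha[y:y + win, x:x + win], hb[y:y + win, x:x + win]
            if sim >= sim_thresh:
                out[y:y + win, x:x + win] = 0.5 * (wa + wb)
            else:
                out[y:y + win, x:x + win] = wa if (wa ** 2).sum() >= (wb ** 2).sum() else wb
    return out

def pcnn_weights(la, lb, iters=30, beta=0.3, decay=0.8, vtheta=20.0):
    """Minimal PCNN: each input's firing count over the iterations becomes its
    low-frequency fusion weight (weight of la = relative firing count)."""
    def fire_counts(s):
        s = (s - s.min()) / (np.ptp(s) + 1e-9)
        y = np.zeros_like(s)
        theta = np.ones_like(s)       # dynamic threshold, decays until neurons fire
        count = np.zeros_like(s)
        for _ in range(iters):
            link = sum(np.roll(np.roll(y, dy, 0), dx, 1)
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - y
            u = s * (1 + beta * link)     # modulated internal activity
            y = (u > theta).astype(float)
            theta = decay * theta + vtheta * y
            count += y
        return count
    fa, fb = fire_counts(la), fire_counts(lb)
    return fa / (fa + fb + 1e-9)
</test>
```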
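The fusion rules in (3) are the simplest of the three and can be shown end to end. Again a box-blur split stands in for the NSCT, and the frequency-tuned-style saliency (distance of the blurred image from the global mean) is an assumed stand-in for the unspecified saliency detection method; only the two fusion rules themselves (saliency-weighted low-frequency average, maximum-absolute high-frequency) follow the text.

```python
import numpy as np

def box_blur(img, k=9):
    """Separable-free box blur via shifted sums (stand-in for the NSCT low-pass)."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def saliency(img):
    """Frequency-tuned-style saliency: |blurred image - global mean|."""
    return np.abs(box_blur(img) - img.mean())

def fuse_saliency(a, b):
    """Low-frequency: saliency-weighted average. High-frequency: max-absolute rule."""
    la, lb = box_blur(a), box_blur(b)
    ha, hb = a - la, b - lb
    sa, sb = saliency(a), saliency(b)
    wgt = sa / (sa + sb + 1e-9)
    low = wgt * la + (1 - wgt) * lb
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return low + high   # trivial inverse of the toy split
```

A sanity property of these rules: fusing an image with itself returns the image, since both the weighted average and the max-absolute selection are idempotent.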
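Of the segmentation model in (4), the dual-objective loss is the piece that fits a short sketch. Below is a numpy version of the Tversky loss and a weighted Tversky + cross-entropy objective; the convention shown (alpha on false negatives, beta on false positives, alpha = beta = 0.5 reducing to Dice) and the 0.5 mixing weight are assumptions of this sketch, not parameters reported in the dissertation, and the U-Net, decision fusion, and CRF refinement are not reproduced here.

```python
import numpy as np

def tversky_loss(p, t, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky loss on soft predictions p and binary targets t.
    TI = TP / (TP + alpha*FN + beta*FP); alpha > beta penalizes missed tumor voxels."""
    tp = (p * t).sum()
    fn = ((1 - p) * t).sum()
    fp = (p * (1 - t)).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def bce_loss(p, t, eps=1e-7):
    """Pixelwise binary cross-entropy with clipping for numerical safety."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).mean())

def dual_objective(p, t, lam=0.5):
    """Weighted sum of the two objectives, as in the dual-objective training described."""
    return lam * tversky_loss(p, t) + (1 - lam) * bce_loss(p, t)
```

In a real training loop both terms would be computed on the network's sigmoid outputs per batch; combining a region-overlap loss with a pixelwise loss is a common way to handle the class imbalance of small tumor targets.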
Keywords/Search Tags: radiotherapy, multimodality, image fusion, radiomics, image segmentation