
Multi-modal Image Reconstruction And Fusion

Posted on: 2021-10-17    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y Zhang    Full Text: PDF
GTID: 1488306503462134    Subject: Mathematics
Abstract/Summary:
With the development of engineering technology, many imaging devices provide a variety of different modal images simultaneously. Taking medical images as an example, multi-modal images include CT (computed tomography), MRI (magnetic resonance imaging), SPECT (single-photon emission computed tomography), PET (positron emission tomography), ultrasound and electrical impedance tomography images. Different medical images provide different information about the imaged organs. For example, CT and MRI show the anatomical structure of organs with high spatial resolution, while PET, although of poor spatial resolution, reveals the functional activity of organ metabolism. In clinical practice, doctors frequently use multi-modal images to acquire complementary information. In this thesis, we consider two types of problems, bi-modal joint image reconstruction and image fusion, and propose variational and deep learning based approaches.

For the PET-MRI joint reconstruction problem, we propose an edge-driven weighted total variation model for bi-modal image reconstruction. In view of the structural differences between multi-modal images, this method does not force point-to-point edge consistency between the two modalities. Instead, a common edge indicator pushes the images to be sparse under the transform in smooth regions, which makes full use of the structural similarity between modalities. We apply a proximal alternating direction method to solve the proposed nonconvex model and establish its convergence. Finally, numerical results demonstrate the performance and advantages of the proposed model.

In clinical practice, it is necessary to fuse multi-modal images so as to visualize both pathological and functional structures for accurate diagnosis and appropriate treatment planning. The second part of this thesis addresses image fusion, for which we propose a three-step approach. In the first step, a data-driven tight frame is adaptively constructed from both images to maximize the sparsity of the representation coefficients, which benefits the fusion process. In the third step, we propose a variational reconstruction model that combines the salient features with the intensities of individual smooth regions. In numerical experiments, we apply the proposed model to CT-MRI, PET-MRI, multi-modal MRI and multi-focus natural image fusion, and obtain good performance both visually and quantitatively compared with other existing methods.

In the third part of this thesis, we tackle the bi-modal reconstruction problem with a deep learning network. We present an image restoration method tailored to scenarios where pre-existing, high-quality images from different modalities or contrasts are available in addition to the target image. Our method is based on a novel network architecture that combines the benefits of traditional multi-scale signal representations, such as wavelets, with more recent concepts from data fusion methods. In numerical simulations, T1-weighted MRI images are used to restore noisy and undersampled T2-weighted images. The results demonstrate that the proposed network successfully utilizes information from high-quality reference images and improves the restoration quality of the target image beyond that of existing popular methods. Finally, we discuss the convergence behavior of the network.
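The edge-driven weighting idea above can be illustrated with a minimal sketch: an edge indicator computed from one modality down-weights total variation smoothing across shared edges when restoring the other. Everything here (the function names, the smoothed TV energy, the step sizes, and the simplification to denoising a single target image with a clean reference) is an illustrative assumption, not the thesis's actual joint reconstruction model.

```python
import numpy as np

def grad(u):
    # Forward differences with periodic boundary (horizontal, vertical).
    gx = np.roll(u, -1, axis=0) - u
    gy = np.roll(u, -1, axis=1) - u
    return gx, gy

def div(px, py):
    # Negative adjoint of grad: backward differences with periodic boundary.
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def edge_indicator(ref, eta=0.1):
    # Close to 0 on strong edges of the reference modality, close to 1
    # in smooth regions, so smoothing is suppressed across shared edges.
    gx, gy = grad(ref)
    return 1.0 / (1.0 + (gx**2 + gy**2) / eta**2)

def weighted_tv_denoise(f, w, lam=0.1, step=0.1, eps=0.1, iters=300):
    # Gradient descent on 0.5*||u - f||^2 + lam * sum(w * |grad u|_eps),
    # where |.|_eps is the smoothed (Huber-like) gradient magnitude.
    u = f.copy()
    for _ in range(iters):
        gx, gy = grad(u)
        norm = np.sqrt(gx**2 + gy**2 + eps**2)
        u = u - step * ((u - f) - lam * div(w * gx / norm, w * gy / norm))
    return u
```

In a bi-modal setting one would compute `w` from the better-resolved modality (e.g. MRI) and use it while reconstructing the other (e.g. PET); the thesis's model additionally couples both images through a common edge indicator rather than fixing one as a clean reference.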
Keywords/Search Tags: Dual-modal images, joint reconstruction, fusion, edge-driven, weighted total variation regularization, data-driven tight frame, variational method, image restoration network, multi-scale image representation, information fusion, deep learning network