
Deep Learning-Based Cross-Modality Image Synthesis

Posted on: 2020-11-17
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Xiang
Full Text: PDF
GTID: 1484306218491154
Subject: Biomedical engineering

Abstract/Summary:
Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, both in diagnostic examinations and in medical research trials. Different imaging modalities provide complementary information about living tissue. However, multi-modal examinations are not always possible because of adverse factors such as patient discomfort, increased cost, and prolonged scanning time. In addition, in large imaging studies, incomplete records are not uncommon owing to image artifacts, fast imaging at low resolution, data corruption, or data loss during acquisition, all of which yield low-quality images and compromise the potential of multi-modal acquisitions for disease diagnosis. Given these practical limitations, ideal multi-modality data are hard to collect. In this dissertation, we focus on applying machine learning, and deep learning in particular, to cross-modality image synthesis, so as to provide effective, safe, high-quality imaging for diagnosis, treatment, and medical research.

We explore cross-modality synthesis with deep learning-based frameworks. Cross-modality synthesis can be either a single-modal or a multi-modal cross-modality mapping. Mapping a T1-weighted magnetic resonance (MR) image to a computed tomography (CT) image is a single-modal mapping: the input is the MR image and the output is the CT image. A multi-modal mapping instead fuses several inputs, for example combining a low-dose positron emission tomography (LPET) image with a T1-weighted MR image to synthesize a standard-dose PET (SPET) image, where LPET and the T1-weighted MR image are the inputs and SPET is the output. Another example is using an under-sampled T2-weighted MR image (T2WI) together with a fully-sampled T1-weighted MR image (T1WI) to reconstruct the fully-sampled T2WI, where the former two are the inputs and the latter is the output.

Specifically, we develop a deep embedding convolutional neural network (DECNN) that tackles the MR-to-CT mapping in an end-to-end way. Experimental results suggest that DECNN performs well in terms of both the perceptual quality of the synthesized CT images and the run-time cost of synthesizing a CT image.

For multi-modal cross-modality synthesis, we propose a deep auto-context convolutional neural network that estimates a high-quality standard-dose PET image from the combination of a low-quality low-dose PET image and the accompanying T1-weighted acquisition from magnetic resonance imaging (MRI). Validation on real human brain PET/MRI data shows that the proposed method provides estimation quality competitive with state-of-the-art methods. Meanwhile, our method is highly efficient at test time on a new subject, e.g., taking ~2 s to estimate an entire SPET image, in contrast to ~16 min for the state-of-the-art method. These results demonstrate the potential of the proposed method in real clinical applications.

Another multi-modal cross-modality mapping is T2-weighted MR image reconstruction. We propose to combine complementary MR acquisitions (namely T1WI and under-sampled T2WI) to reconstruct the high-quality image (i.e., the fully-sampled T2WI) using a Dense-Unet architecture. To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a target MR modality.

The cross-modality tasks above, namely MR-to-CT synthesis, standard-dose PET reconstruction, and T2-weighted MR image reconstruction, are all supervised methods that rely on paired data to train complex models. In real scenarios, paired data that meet the training requirements are not always available. We therefore explore a novel method for cross-modality image synthesis trained with unpaired data. Specifically, we adopt generative
adversarial networks and conduct fast training in a cyclic manner. A new structural dissimilarity loss, which captures detailed anatomy, is introduced to enhance the quality of the synthesized images. We validated the proposed algorithm on three popular image synthesis tasks: brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. The experimental results demonstrate that the proposed method achieves good synthesis performance using unpaired data only.
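The structural dissimilarity loss mentioned above can be understood as the complement of the structural similarity index (SSIM). The sketch below uses a simplified single global window in NumPy; the dissertation's actual loss sits inside the GAN objective and likely uses local windows, so the function names and constants here are illustrative assumptions only.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM for two images normalized to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def structural_dissimilarity(x, y):
    """DSSIM-style loss: ~0 for identical images, larger for structural mismatch."""
    return (1.0 - ssim_global(x, y)) / 2.0

rng = np.random.default_rng(0)
real = rng.random((64, 64))                                   # toy target image
fake = np.clip(real + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)  # toy synthesis

print(structural_dissimilarity(real, real))  # ~0 for identical images
print(structural_dissimilarity(real, fake))  # small but positive
```

During training, this term would be added to the adversarial and cycle-consistency losses so that the generator is penalized for anatomical detail it fails to preserve.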
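The multi-modal mappings described above (e.g., fusing LPET and T1WI to synthesize SPET) typically stack the modalities as input channels of a single network. The following NumPy sketch shows this early-fusion idea with one hand-rolled convolution standing in for a deep model; the helper names, kernel, and image sizes are illustrative assumptions, not the dissertation's actual DECNN or Dense-Unet architectures.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D convolution of a (C, H, W) input with a (C, kH, kW) kernel,
    producing one (H-kH+1, W-kW+1) feature map."""
    _, h, wd = x.shape
    _, kh, kw = w.shape
    out = np.zeros((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w)
    return out

def fuse_and_map(lpet, t1, kernel):
    """Early fusion: stack the two modalities as channels, then map them
    jointly with one convolution (standing in for a deep network)."""
    x = np.stack([lpet, t1], axis=0)  # (2, H, W) multi-modal input
    return conv2d(x, kernel)          # synthesized single-channel output

rng = np.random.default_rng(0)
lpet = rng.random((32, 32))     # toy low-dose PET slice
t1 = rng.random((32, 32))       # toy T1-weighted MR slice
k = rng.random((2, 3, 3))       # one 3x3 kernel spanning both channels
spet_hat = fuse_and_map(lpet, t1, k)
print(spet_hat.shape)  # (30, 30)
```

Because the kernel spans both channels, every output value mixes information from both modalities, which is what lets the complementary acquisition (T1WI) guide the synthesis of the target modality.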
Keywords/Search Tags: Cross-modality image synthesis, deep learning, fast MR reconstruction, unpaired data