Medical images, which present anatomical information noninvasively, are widely used in clinical practice to help physicians reach more accurate diagnoses. Combining complementary information from different modalities can further improve diagnostic accuracy. However, constrained by physical properties, contraindications, and cost, it is challenging to acquire medical images with adequate modalities and high quality in the clinic. Therefore, medical image computation methods that can reconstruct high-quality multi-modality medical images have attracted considerable attention. Although deep-learning-based medical image reconstruction methods have achieved some success, problems such as unclear details and inconsistent content remain. In this paper, finer-perceptive super-resolution and cross-modality reconstruction models are proposed to synthesize high-quality multi-modality medical images.

For the super-resolution of brain magnetic resonance (MR) images, a frequency-parallel super-resolution model, called Fine-Perceptive Generative Adversarial Networks (FP-GANs), is proposed to perceive and recover the finer texture details of MR images. Adopting a divide-and-conquer strategy, it combines the wavelet transform with generative adversarial networks to super-resolve the low-frequency (LF) and high-frequency (HF) components of an MR image separately and in parallel. Specifically, FP-GANs first decomposes an MR image into its low-frequency approximation and high-frequency textures (horizontal, vertical, and diagonal) in the wavelet domain; four sub-GANs then super-resolve the corresponding sub-bands simultaneously. In this way, the proposed model can capture the HF details. To further enhance the detail-perceptive performance of FP-GANs, a sub-band attention mechanism that trades off the weights from the perspective of sub-bands is proposed. In addition, a joint loss function that guides the optimization from both the wavelet domain and the image domain is proposed to make the super-resolved results more consistent. Extensive experiments on the Multi Res_7T and ADNI datasets demonstrate that FP-GANs achieves finer super-resolution with greater consistency than competing methods. Moreover, classification experiments on Alzheimer's disease show that the super-resolved MR images improve classification accuracy, which further demonstrates the clinical value of the proposed model.

For cross-modality reconstruction, a dual-stage cross-modality generative adversarial network (DSGAN) is proposed to fully extract and disentangle the multi-scale shared latent features and frequency features. In the coarse stage, a texture extractor based on a feature pyramid network mines multi-scale latent features from the source-modality images bottom-up and generates coarse target-modality images top-down. In addition, a joint encoder encodes the latent features and frequency features together, yielding joint codes from high level to low level. In this way, the features shared between the source and target modalities can be fully mined and decoupled, and the joint codes enrich the available information. In the fine stage, a fine-texture generator based on a generative adversarial network decodes the joint codes progressively and in parallel from high-level to low-level features to synthesize finer target-modality images. This helps stabilize the reconstruction procedure and thus alleviates inconsistencies such as artifacts. Comprehensive experiments on the ADNI dataset demonstrate that the proposed model can synthesize target-modality images with greater consistency, preserving the anatomical structure and pathological information of the source-modality images.
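The wavelet decomposition step can be made concrete with a minimal NumPy sketch of a single-level 2-D Haar transform (the wavelet basis actually used by FP-GANs is not specified here, so Haar is an assumption). It splits an image into the low-frequency approximation (LL) and the three high-frequency texture sub-bands (LH, HL, HH) that the four sub-GANs would super-resolve in parallel, together with the inverse transform that recombines them:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform of an even-sized image.

    Returns the low-frequency approximation (LL) and the three
    high-frequency texture sub-bands (LH, HL, HH).
    """
    # Transform along the vertical direction (pairs of rows).
    lo = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    hi = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)
    # Transform along the horizontal direction (pairs of columns).
    ll = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2)
    lh = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2)
    hl = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2)
    hh = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: recombine the four sub-bands into an image."""
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2] = (ll + lh) / np.sqrt(2)
    lo[:, 1::2] = (ll - lh) / np.sqrt(2)
    hi[:, 0::2] = (hl + hh) / np.sqrt(2)
    hi[:, 1::2] = (hl - hh) / np.sqrt(2)
    img = np.empty((2 * lo.shape[0], lo.shape[1]))
    img[0::2, :] = (lo + hi) / np.sqrt(2)
    img[1::2, :] = (lo - hi) / np.sqrt(2)
    return img
```

Because the Haar transform is orthonormal, recombining the four super-resolved sub-bands with the inverse transform loses no information, which is what makes the divide-and-conquer strategy viable.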
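The sub-band attention mechanism is described only at a high level, so the following is a hypothetical sketch of one way to trade off sub-band weights: each of the four sub-bands is rescaled by a softmax weight derived from its mean absolute energy. The energy-based weighting is an illustrative assumption, not the model's actual design:

```python
import numpy as np

def subband_attention(subbands):
    """Hypothetical sub-band attention: compute a scalar weight per
    sub-band from its mean absolute energy via a softmax, then rescale
    each sub-band by its weight."""
    energy = np.array([np.mean(np.abs(b)) for b in subbands])
    weights = np.exp(energy) / np.sum(np.exp(energy))  # softmax over bands
    weighted = [w * b for w, b in zip(weights, subbands)]
    return weighted, weights
```

In a learned version, the weights would come from a small trainable network rather than a fixed energy statistic, but the effect is the same: sub-bands carrying more texture detail receive more emphasis.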
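The joint loss described above could be sketched as a weighted sum of an image-domain term and a wavelet-domain term; the use of L1 norms and the weights `lam_img`/`lam_wav` are illustrative assumptions:

```python
import numpy as np

def haar_subbands(img):
    """Single-level Haar sub-bands (LL, LH, HL, HH) of an even-sized image."""
    lo = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    hi = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)
    return [(lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2),
            (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2),
            (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2),
            (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2)]

def joint_loss(sr, hr, lam_img=1.0, lam_wav=1.0):
    """Hypothetical joint objective: mean absolute error in the image
    domain plus mean absolute error over the wavelet sub-bands."""
    img_term = np.mean(np.abs(sr - hr))
    wav_term = np.mean([np.mean(np.abs(a - b))
                        for a, b in zip(haar_subbands(sr), haar_subbands(hr))])
    return lam_img * img_term + lam_wav * wav_term
```

Penalizing errors in both domains lets the image-domain term enforce global pixel consistency while the wavelet-domain term separately penalizes lost high-frequency texture.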