In clinical practice, the diagnosis and treatment of most diseases rely on medical images as auxiliary tools. However, owing to the inherent characteristics of medical images, such as low resolution, strong noise, and complex tissue structures, together with the huge volume of clinical cases and the heavy workload of radiologists, manual image reading can easily lead to missed diagnoses and misdiagnoses, which can be fatal for patients who need timely intervention. Therefore, efficient models that automatically process and analyze medical images are urgently needed in clinical practice. Traditional automated methods for medical imaging and diagnosis problems mostly rely on classic machine learning algorithms, which often require the manual selection of specific imaging features (such as gradients). In general, however, manually extracted features are very limited and cannot exploit image information well, resulting in low reliability of the final results. In recent years, deep learning has made great progress in medical image computing and computer-aided diagnosis thanks to its powerful information extraction capability. In view of this, this study mainly explores deep-learning-based intelligent computation methods for medical imaging and diagnosis problems.

Generally speaking, the successful application of medical imaging in clinical diagnosis and treatment involves three stages: image reconstruction and denoising, disease detection, and lesion segmentation. Image reconstruction and denoising provide high-quality tissue details for subsequent clinical analysis; disease detection provides radiologists with accurate detection results; and lesion segmentation provides lesion details for clinical interventions (such as tumor resection). Following these three stages of the clinical application of medical imaging, this paper studies one problem in each stage: CT reconstruction and metal artifact reduction, the detection of 14 diseases in chest X-ray images, and three-dimensional multi-modal breast tumor segmentation. The details are as follows:

(1) CT metal artifact reduction. Existing deep-learning-based CT metal artifact reduction methods generally operate in a single domain (the sinogram domain or the image domain) or in a dual domain. Among them, the more advanced are dual-domain algorithms that combine the sinogram domain and the image domain. However, existing dual-domain algorithms fail to avoid interference from invalid information in corrupted regions when repairing sinograms, which introduces serious secondary artifacts into the final results. Besides, existing methods do not let information from different domains interact effectively and thus do not make full use of the complementary nature of cross-domain information. Therefore, this paper proposes two new dual-domain metal artifact reduction networks: one is designed to suppress the interference of invalid information during sinogram restoration, and the other allows information from different domains to assist and promote each other, thereby removing more metal artifacts and related noise. Experimental results on different parts of the human body show that the two newly proposed methods outperform previous methods, which reflects their effectiveness for clinical practice.
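To make the idea of avoiding corrupted-region interference concrete, the following is a minimal PyTorch sketch of masked sinogram restoration, in which the metal trace is zeroed out so the network cannot draw on invalid projection values and only the trace region is filled in. The class name, layer configuration, and mask convention are illustrative assumptions, not the networks proposed in this thesis, and the image-domain refinement branch is omitted for brevity.

```python
import torch
import torch.nn as nn

class MaskedSinogramInpainter(nn.Module):
    """Hypothetical sinogram-domain restoration: corrupted projections are
    masked out before convolution so they cannot leak into the repair."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, sinogram, metal_trace):
        # metal_trace == 1 inside the corrupted region; suppress those values
        # so the network sees only valid projection data plus the mask itself.
        valid = sinogram * (1.0 - metal_trace)
        restored = self.net(torch.cat([valid, metal_trace], dim=1))
        # keep uncorrupted measurements untouched; fill only the metal trace
        return valid + restored * metal_trace

# usage with hypothetical sizes: (batch, 1, projection angles, detector bins)
sino = torch.randn(1, 1, 180, 256)
trace = (torch.rand(1, 1, 180, 256) > 0.9).float()   # binary metal-trace mask
repaired = MaskedSinogramInpainter()(sino, trace)
```

Restricting the network's output to the masked trace is one simple way to guarantee that uncorrupted measurements pass through unchanged, which is what prevents secondary artifacts from spreading beyond the metal region.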
(2) The detection of 14 diseases in chest X-ray images. Existing automatic disease diagnosis methods based on chest X-ray images usually extract disease-related features from images and use them directly for classification, or simply divide and encode lung regions to obtain coarse location information of diseases. However, current methods provide no detailed location information of a disease and do not associate the location of a disease with its possible type. Based on this, this paper proposes a relative location information perception network, which combines the common locations of diseases with disease types to improve the accuracy of disease diagnosis. Experimental results on a chest X-ray dataset containing 14 types of diseases show that, compared with previous chest disease diagnosis methods, the proposed method achieves state-of-the-art results.

(3) 3D multi-modal breast tumor segmentation. Existing deep-learning-based multi-modal breast tumor segmentation methods are mostly based on two-dimensional image segmentation and use information from one modality to assist tumor segmentation in another modality. Among them, multi-modal information is mainly fused by concatenation, which simply superimposes different modal information and does not make full use of the mutual assistance between modalities. Therefore, this paper proposes a new cross-modal information interaction 3D multi-modal breast tumor segmentation network that segments breast tumors in different modalities simultaneously. In this network, information from different modalities interacts with and promotes each other throughout training, which effectively reduces potential false positives and false negatives in the segmentation results. Experiments on a clinical dataset show that the newly proposed method achieves good clinical results and can even correct some errors in radiologists' manual annotations, which implies that the proposed method has good prospects for clinical application.
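As an illustration of cross-modal interaction, as opposed to one-off concatenation, the following is a minimal PyTorch sketch of a mutual gating block between two 3D feature streams, where each modality re-weights the other's features in both directions. The block name, gating mechanism, and feature shapes are hypothetical assumptions rather than the architecture proposed in this thesis.

```python
import torch
import torch.nn as nn

class CrossModalInteraction(nn.Module):
    """Hypothetical fusion block: each modality produces a gate that
    re-weights the other modality's features, so information flows in
    both directions instead of being concatenated once."""
    def __init__(self, channels):
        super().__init__()
        self.gate_a = nn.Sequential(nn.Conv3d(channels, channels, 1), nn.Sigmoid())
        self.gate_b = nn.Sequential(nn.Conv3d(channels, channels, 1), nn.Sigmoid())

    def forward(self, feat_a, feat_b):
        # modality A is modulated by a gate computed from modality B and
        # vice versa; residual connections preserve each original signal
        out_a = feat_a + feat_a * self.gate_b(feat_b)
        out_b = feat_b + feat_b * self.gate_a(feat_a)
        return out_a, out_b

# usage: such blocks could be interleaved between the encoder stages of
# two 3D segmentation streams (shapes below are hypothetical)
fuse = CrossModalInteraction(channels=16)
a = torch.randn(1, 16, 8, 32, 32)   # features of modality A
b = torch.randn(1, 16, 8, 32, 32)   # features of modality B
a2, b2 = fuse(a, b)
```

Because the exchange happens at every stage where such a block is placed, each modality can suppress the other's spurious responses early, which is one plausible mechanism for the reduction of false positives and false negatives described above.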