The emergence of medical imaging has revolutionized clinical diagnosis, improving its accuracy, speed, and precision, and ushering in a new era of non-invasive diagnosis and personalized treatment. The advent of artificial intelligence has further accelerated innovation and development in the field. Among these innovations, multimodal imaging occupies an important position: it refers to acquiring multiple kinds of image information with different medical imaging techniques, such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Because different imaging techniques provide different types of information, multimodal imaging can reflect a patient's condition more comprehensively and accurately, providing important support for clinical diagnosis and treatment. It is therefore important to study multimodal medical imaging algorithms and their application to clinical aid diagnosis.

Multimodal medical imaging research faces two main challenges: multimodal image registration, since different imaging techniques produce images with different appearances and spatial distributions; and multimodal image fusion, that is, exploiting cross-modal and complementary information to improve diagnostic accuracy and treatment performance. Overcoming these challenges will provide more comprehensive and accurate information for clinical diagnosis, surgical navigation, treatment guidance, and survival prediction.

Survival prediction is an important tool for assisting clinicians in the further treatment of postoperative cancer patients, by assessing their survival after malignant tumor resection. Radiomics research has provided molecular-level prognostic indicators for survival prediction by extracting quantitative features from conventional medical images and relating those features to survival probability. Deep learning methods based on convolutional neural networks automatically extract high-throughput features, further improving the accuracy of radiomics-based survival prediction and promoting the development of personalized precision medicine.

This article applies deep learning to the registration and fusion of multimodal medical images and proposes a novel survival prediction approach based on multimodal CT. The research focuses on three main aspects and innovations:

(1) We propose a multimodal registration framework for aligning a moving image to multiple fixed images, enabling multimodal registration across three or more modalities. Our approach uses inverse mappings of the deformation fields so that the subtasks of multimodal registration are optimized jointly while registration is also performed in the reverse direction. This improves the invertibility and consistency of the generated deformation fields and gives the subtasks better generalization ability (see the sketch below). We conducted experiments on multimodal MRI registration of glioblastoma multiforme (FLAIR to T1 and T2). Compared with current state-of-the-art multimodal registration methods, our approach achieves better registration results.
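To make the inverse-mapping idea concrete, the following is a minimal 2D PyTorch sketch of an inverse-consistency penalty on a pair of forward and backward deformation fields. It is illustrative only, not the implementation used in this work; the `warp` helper, tensor shapes, and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp an image with a dense displacement field via grid_sample.

    image: (N, C, H, W); flow: (N, 2, H, W) displacements in pixels.
    """
    n, _, h, w = image.shape
    # Base identity grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=image.dtype, device=image.device),
        torch.arange(w, dtype=image.dtype, device=image.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # (N, 2, H, W)
    # Normalize sampling positions to [-1, 1] as grid_sample expects.
    grid_x = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / (h - 1) - 1.0
    norm_grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(image, norm_grid, align_corners=True)

def inverse_consistency_loss(flow_fwd, flow_bwd):
    """Penalize deviation of the composed forward/backward fields from identity.

    If phi_fwd(x) = x + u(x) and phi_bwd(y) = y + v(y), then composing them
    gives x + u(x) + v(x + u(x)); for an invertible, consistent pair this
    residual displacement should vanish.
    """
    # Backward displacement resampled at the forward-warped positions.
    bwd_at_fwd = warp(flow_bwd, flow_fwd)
    residual = flow_fwd + bwd_at_fwd  # zero when the composition is identity
    return residual.pow(2).mean()
```

In practice such a penalty would be added to the similarity and smoothness terms of the registration objective, encouraging the composed forward and backward deformations to approximate the identity map.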
(2) We designed a multimodal CT-based deep learning survival prediction model aimed at assisting TNM staging and addressing the inaccurate staging of stage II-III colorectal cancer patients. The model takes multimodal CT data as input and self-learns high-throughput features without relying on precise volume-of-interest (VOI) delineation. Through the loss function, we analyze the relationship between the learned features and patients' disease-free survival, and the model outputs a risk score as an indicator for high- and low-risk stratification (see the survival-loss sketch below). The risk score can serve as an important factor to assist TNM staging. Experimental results show that our model outperforms radiomics and traditional clinical models in prediction accuracy. We also discuss the interpretability of the model, which is of great concern in medical applications.

(3) We propose a multimodal image fusion module for use in survival prediction models. A self-learning fusion scheme based on an attention mechanism computes the contribution weights of the two CT modalities for predicting disease-free survival in colorectal cancer patients, and the weighted images of the two modalities are then fused at the channel level (see the fusion sketch below). This approach not only improves prediction performance but also makes better use of the information in multimodal data to inform future medical diagnosis and treatment. Experiments comparing unimodal CT with multimodal CT as the input to the survival prediction model show that the multimodal input yields better prediction performance.
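The text does not specify the exact form of the survival loss in contribution (2); a common choice for relating a learned risk score to disease-free survival under censoring is the negative Cox partial log-likelihood, sketched here under that assumption (function and argument names are illustrative).

```python
import torch

def cox_partial_log_likelihood_loss(risk_scores, times, events):
    """Negative Cox partial log-likelihood (a common deep survival loss).

    risk_scores: (N,) model outputs; higher means higher predicted risk.
    times:       (N,) follow-up times (e.g., disease-free survival).
    events:      (N,) 1.0 if the event (recurrence) occurred, 0.0 if censored.
    """
    # Sort by descending time so each patient's risk set is a prefix.
    order = torch.argsort(times, descending=True)
    scores = risk_scores[order]
    events = events[order]
    # log sum_{j in risk set(i)} exp(score_j), via a cumulative logsumexp.
    log_cum_hazard = torch.logcumsumexp(scores, dim=0)
    # Only uncensored patients contribute terms to the partial likelihood.
    event_terms = (scores - log_cum_hazard) * events
    return -event_terms.sum() / events.sum().clamp(min=1.0)
```

At inference time, the scalar risk score can be thresholded (for example, at the cohort median) to stratify patients into high- and low-risk groups, as described above.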
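As a sketch of the attention-based, channel-level fusion in contribution (3): a small gating network predicts one contribution weight per modality, and the weighted feature maps are concatenated along the channel axis. Module and parameter names are hypothetical, and the actual architecture may differ.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Self-learned fusion of two modality feature maps (illustrative sketch).

    A gating network predicts a per-modality contribution weight; the
    weighted feature maps are then concatenated along the channel axis.
    """

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),   # global context over the CT volume
            nn.Flatten(),
            nn.Linear(channels, 2),    # one logit per modality
        )

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (N, C, D, H, W) features from the two CT modalities.
        weights = torch.softmax(self.gate(feat_a + feat_b), dim=1)  # (N, 2)
        w_a = weights[:, 0].view(-1, 1, 1, 1, 1)
        w_b = weights[:, 1].view(-1, 1, 1, 1, 1)
        # Channel-level fusion: concatenate the weighted modality features.
        return torch.cat([w_a * feat_a, w_b * feat_b], dim=1)
```

For example, `AttentionFusion(64)(feats_modality1, feats_modality2)` would return a `(N, 128, D, H, W)` tensor whose channels carry both modalities, weighted by their learned contributions to the prediction.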