Medical data, with their diversity and multidimensional characteristics, serve as the cornerstone of contemporary medical and health research. These datasets span scales from micro to macro and range from static images to dynamic videos, providing crucial insights for early disease detection, diagnosis, and treatment. Their complexity, however, also challenges accurate analysis and effective utilization; in medical image processing and analysis in particular, extracting useful information from vast amounts of multimodal medical data remains a key open problem. Visualization technology, as a powerful data processing tool, can transform complex medical data into intuitive images or videos, greatly facilitating the understanding and application of medical information. With effective visualization of high-dimensional data, physicians and researchers can observe the hallmark features of diseases more directly, enabling more accurate diagnoses and more effective treatment planning. Furthermore, with the advancement of deep learning, medical image analysis methods that incorporate visualization techniques have demonstrated great potential and broad application prospects. Against this backdrop and addressing these needs, this dissertation delves into deep-learning-based high-dimensional visualization methods for multimodal medical data and proposes the following innovative studies:

A medical image data augmentation method based on Generative Adversarial Networks (GANs) is proposed. By generating high-quality synthetic medical images, this method effectively expands the training dataset, significantly enhancing the generalizability and accuracy of medical image segmentation models. Compared with conventional data augmentation methods, our results demonstrate that this approach improves model performance on unseen medical images.

An enhanced method for improving Z-axis accuracy in three-dimensional medical images based on a multiscale optical flow fusion network (MSFlowNet) is proposed. This method integrates intermediate optical flow estimation with an encoder-decoder structure to precisely capture minute variations within the images. By employing feature pyramids, it performs accurate frame interpolation at arbitrary positions, significantly enhancing the continuity and overall quality of three-dimensional medical images.

A heart sound image processing and classification method based on the Partition Attention Network (PANet) is proposed. By efficiently converting heart sound signals into image representations and using PANet for deep feature learning and classification, this method achieves high-precision identification of heart diseases. Experimental comparisons reveal that it surpasses existing techniques in both accuracy and efficiency for heart sound signal classification.

A dynamic ultrasound video generation method leveraging Transformer models is introduced, effectively integrating multimodal data including text descriptions, reference images, and blood flow spectra. This approach enhances the quality and realism of the generated ultrasound videos, providing valuable visual resources for the early diagnosis of fetal heart conditions. Comparative analyses with existing video generation technologies show that this method exhibits superior capability in generating dynamic medical images, highlighting its potential for improving diagnostic processes.
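The frame-interpolation idea underlying the optical-flow contribution can be illustrated with a minimal sketch: given two adjacent slices and an optical flow field between them, an intermediate slice at an arbitrary fractional position t is synthesized by warping both slices toward t and blending them. Note this is an illustrative assumption, not the dissertation's actual network: the `warp` helper, the nearest-neighbor sampling, and the hand-supplied constant flow below stand in for the multiscale flow estimation the method itself performs.

```python
import numpy as np

def warp(img, flow):
    """Backward-warp a 2-D image by a per-pixel flow (nearest-neighbor sampling).
    flow[..., 0] is the row displacement, flow[..., 1] the column displacement."""
    h, w = img.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.rint(rows - flow[..., 0]).astype(int), 0, h - 1)
    src_c = np.clip(np.rint(cols - flow[..., 1]).astype(int), 0, w - 1)
    return img[src_r, src_c]

def interpolate_slice(slice0, slice1, flow01, t):
    """Synthesize a slice at fractional position t in (0, 1) between slice0 and slice1,
    given the (assumed known) flow from slice0 to slice1."""
    # Warp each endpoint slice toward position t along the flow, then blend
    # with weights proportional to temporal proximity.
    warped0 = warp(slice0, t * flow01)
    warped1 = warp(slice1, -(1.0 - t) * flow01)
    return (1.0 - t) * warped0 + t * warped1

# A bright point at row 4 moves to row 6 in the next slice; at t = 0.5
# the interpolated slice places it halfway, at row 5.
slice0 = np.zeros((10, 10)); slice0[4, 4] = 1.0
slice1 = np.zeros((10, 10)); slice1[6, 4] = 1.0
flow = np.zeros((10, 10, 2)); flow[..., 0] = 2.0   # constant 2-row shift
mid = interpolate_slice(slice0, slice1, flow, 0.5)
```

In the actual method the flow field is estimated, not given, and warping is done with sub-pixel (bilinear) sampling at multiple scales; the blend-of-two-warps structure, however, is the core of flow-based intermediate frame synthesis.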
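The first step of the heart sound contribution, turning a one-dimensional acoustic signal into a two-dimensional image that a vision network can classify, can be sketched with a plain short-time Fourier transform. The window length, hop size, and log scaling below are illustrative assumptions; the time-frequency representation actually fed to PANet may differ.

```python
import numpy as np

def log_spectrogram(signal, win_len=256, hop=64):
    """Convert a 1-D signal into a log-magnitude spectrogram 'image'.
    Rows are frequency bins, columns are time frames."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spectrum = np.abs(np.fft.rfft(frames, axis=1))  # (frames, freq_bins)
    return np.log1p(spectrum).T                     # (freq_bins, frames)

# Example: a synthetic stand-in for a heart sound, mixing a dominant
# low-frequency component (40 Hz) with a weaker higher one (200 Hz).
fs = 2000                                  # sampling rate in Hz (assumed)
t = np.arange(fs) / fs                     # one second of signal
sig = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
img = log_spectrogram(sig)                 # 2-D array usable as an image
```

Once the signal is in this image form, classification reduces to a standard image recognition problem, which is what motivates applying an attention-based vision network to heart sounds.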