Medical image segmentation is a key problem in medical image processing: it extracts regions of interest (ROIs) such as lesions and underpins medical imaging analysis and clinical diagnosis. Doctors cannot handle large, complex medical image datasets by experience alone. In addition, deep learning models still perform unsatisfactorily on multi-modal and multi-center data, and segmentation methods for high-dimensional medical images remain under-studied. Developing efficient and accurate medical image segmentation algorithms is therefore of great significance for clinical applications. This thesis studies two key problems in medical image segmentation: skull stripping in brain images and left-ventricular myocardium segmentation of the beating heart. (1) A deep iterative fusion network (DIFNet) is proposed for skull stripping. DIFNet consists of an encoder and a decoder, connected by skip connections built from multiple upsampled and iteratively fused feature maps. The encoder is composed of residual convolution blocks, so shallow semantic information flows more easily into the deep layers and vanishing gradients are mitigated. The decoder is composed of dual-branch upsampling modules: deconvolutions with different receptive fields are applied in parallel and their output feature maps are summed as the module output, which restores finer details. A Dice loss with L2 regularization is introduced to train the network. To verify segmentation performance, the model is compared against traditional software tools and deep learning models on several public datasets and on multi-modal data provided by Guizhou Provincial People's Hospital, with quantitative analysis of metrics such as the Dice score, sensitivity, and specificity. The experimental results show that the proposed DIFNet
can strip the skull quickly and accurately; compared with mainstream skull-stripping models, it achieves higher accuracy and shows better robustness and generalization. (2) An optical flow and semantic feature fusion segmentation network (OSFNet) is designed to segment the left-ventricular myocardium of the beating heart. The model uses the optical flow field to compute cardiac motion features between time points and fuses them with the texture features of the image itself, improving the model's sensitivity to the moving left-ventricular region. The model comprises an encoder and a decoder: a multi-receptive-field pooling operation is added at the encoder stage to reduce the loss of semantic features, and the decoder stage restores image features along multiple branches so that the pooled features can be recovered as fully as possible. The network is optimized with a weighted Dice loss. The model is trained and tested on public datasets and evaluated quantitatively with metrics such as the Dice score, the Jaccard similarity coefficient, and the Hausdorff distance. Experimental results show that fusing motion features improves segmentation performance; compared with multiple baseline models, the proposed model improves segmentation substantially and segments the left-ventricular myocardium accurately.
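The two loss functions named above, a Dice loss with L2 regularization (DIFNet) and a weighted Dice loss (OSFNet), can be sketched as follows. This is a minimal NumPy illustration, not the thesis implementation; the regularization coefficient, the per-class weighting scheme, and all function names are assumptions for exposition.

```python
import numpy as np

def dice_loss_l2(pred, target, params, lam=1e-4, eps=1e-6):
    """Dice loss with an L2 penalty on the model parameters (DIFNet-style).

    pred   : predicted foreground probabilities, any shape
    target : binary ground-truth mask, same shape
    params : iterable of weight arrays to regularize
    """
    inter = np.sum(pred * target)
    dice = (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    l2 = sum(np.sum(w * w) for w in params)  # squared L2 norm of all weights
    return (1.0 - dice) + lam * l2

def weighted_dice_loss(pred, target, weights, eps=1e-6):
    """Weighted multi-class Dice loss (OSFNet-style).

    pred, target : arrays of shape (num_classes, ...) with per-class
                   probabilities / one-hot labels
    weights      : per-class weights, length num_classes
    """
    axes = tuple(range(1, pred.ndim))  # reduce over spatial dims only
    inter = np.sum(pred * target, axis=axes)
    denom = np.sum(pred, axis=axes) + np.sum(target, axis=axes)
    dice_per_class = (2.0 * inter + eps) / (denom + eps)
    w = np.asarray(weights, dtype=float)
    return 1.0 - np.sum(w * dice_per_class) / np.sum(w)
```

A perfect prediction drives both losses toward zero (plus the L2 term in the first case); the class weights let the myocardium class dominate the objective when the foreground is small relative to the background.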
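The three evaluation metrics used in the quantitative analysis can be sketched as below, assuming binary masks and contour point sets stored as NumPy arrays; this is an illustrative sketch rather than the evaluation code used in the thesis.

```python
import numpy as np

def dice_score(a, b, eps=1e-6):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def jaccard_index(a, b, eps=1e-6):
    """Jaccard similarity coefficient (intersection over union)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return (inter + eps) / (union + eps)

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between point sets of shape (N, d), (M, d)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice and Jaccard measure region overlap, while the Hausdorff distance measures the worst-case boundary disagreement, which is why segmentation studies typically report both kinds of metric together.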