
Machine Learning Classification Of Multi-modal Medical Images

Posted on: 2022-07-13
Degree: Doctor
Type: Dissertation
Country: China
Candidate: P Guo
Full Text: PDF
GTID: 1520306731467824
Subject: Computer Science and Technology
Abstract/Summary:
In recent years, with the development of artificial intelligence and machine vision, machine-learning classification and detection of multi-modal images have found wide application in computer-aided clinical diagnosis. However, because of the low imaging quality of medical images, blurred boundaries, insufficient labeled and training samples, and the high complementarity of multi-modal images together with the difficulty of fusing them, problems of accuracy, generalization ability, and system robustness in medical image classification have become increasingly prominent. Some scholars have studied multi-modal medical image classification from the perspectives of deep network models and feature fusion. These studies have improved classification performance to some extent, but several key problems remain to be solved: determining the number of scales in clustering-based multi-scale superpixel segmentation; optimizing the multi-scale decomposition level and fusion rules in pixel-level fusion classification; and representing the correlation among multiple features in the high-dimensional data of feature-level fusion classification. Classifying multi-modal medical images with machine learning therefore remains a challenging task.

This dissertation studies machine-learning classification techniques based on pixel-level fusion and feature-level fusion. To address the problems above, we focus on multi-pixel fusion and multi-feature fusion classification, and study the corresponding multi-scale superpixel segmentation based on cluster analysis, direct multi-feature classification for small samples, image fusion based on convolutional morphological component analysis, and image fusion based on convolutional sparse representation and mutual-information correlation in the non-subsampled shearlet transform (NSST) domain. The main work and innovations of this study are summarized as follows:

(1) A cluster learning method based on hypergraph Markov chain relaxation, together with a corresponding multi-scale superpixel segmentation method, is proposed. The clustering problem is decomposed into two subproblems. The first is to optimize the number of clusters through the Markov chain relaxation of the hypergraph model, via random walks and diffusion mapping. The second is to make the Markov relaxation converge to a meaningful geometric structure with little information loss, by means of a mutual-information target loss function.

(2) A semi-supervised, feature-level collaborative classification method for medical images is proposed to address the problem of multi-modal image correlation. The principle is to unify the complementary features and correlated features of multi-modal images within a transductive learning framework. Experimental results on the BraTS13 image dataset show that, when labeled samples are insufficient, this method outperforms the semi-supervised multi-view distance metric learning algorithm, the multi-view random learning method based on high-order distance, and the Laplacian support vector machine algorithm.

(3) A pixel-level fusion classification method, namely an image fusion method based on convolutional morphological component analysis and guided filtering, is proposed. The method addresses two problems. The first is to alleviate the edge artifacts of linear transformation, usually caused by noise interference in SR-based transform-domain methods, by means of a guided filter operator. The second is to guarantee the sparsity of the sparse coding through a new maximum-based fusion scheme. In tests on medical images from the Whole Brain Atlas dataset, the method shows a clear advantage in the En, Qe, MI, QAB/F, and VIF indicators over the convolutional sparse representation fusion algorithm, the convolutional sparse representation fusion algorithm combined with morphological analysis, and a spatial fusion algorithm based on guided filtering.

(4) A pixel-level fusion classification method, namely an NSST-domain image fusion method based on convolutional sparse representation and mutual-information correlation, is proposed. The method addresses two problems. The first is the decomposition scale of the NSST transform: an appropriate scale extracts sufficient spatial detail while avoiding the noise sensitivity of high-frequency sub-band fusion. The second is the choice of fusion strategy: regions judged dissimilar by mutual-information correlation are fused by maximum Laplacian gradient energy, while similar regions are fused by a center-pixel-energy weighted average. Experimental results on the Whole Brain Atlas image dataset show that, compared with seven recent fusion methods, the proposed method is robust in both subjective effect and objective evaluation metrics.
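To make the idea in (1) concrete, the following is a minimal sketch of choosing the number of clusters from the spectrum of a random-walk (Markov) transition matrix, the eigengap heuristic that diffusion-style relaxations rely on. It uses a plain pairwise-affinity graph rather than the dissertation's hypergraph model and mutual-information loss; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def diffusion_cluster_count(X, sigma=1.0, max_k=6):
    """Estimate a cluster count from the diffusion spectrum of a random walk."""
    # Pairwise Gaussian affinities (a simple graph stand-in for the hypergraph)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    # Row-normalise into a Markov (random-walk) transition matrix
    P = W / W.sum(axis=1, keepdims=True)
    # Sort the eigenvalue magnitudes in descending order; a large gap after
    # the k-th eigenvalue suggests k well-separated clusters
    vals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    gaps = vals[:max_k - 1] - vals[1:max_k]
    return int(np.argmax(gaps)) + 1
```

On well-separated data, the sorted diffusion spectrum has one near-unit eigenvalue per cluster, so the largest gap falls right after them.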
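The transductive, semi-supervised idea behind (2) can be illustrated with classic graph-based label propagation, in which labels spread from the few labeled samples to the unlabeled ones over an affinity graph. This is a generic stand-in rather than the dissertation's collaborative multi-modal method; all names and parameters are assumptions.

```python
import numpy as np

def label_propagation(X, y, alpha=0.9, sigma=1.0, iters=100):
    """Transductive classification: y holds class ids, -1 for unlabeled points."""
    # Gaussian affinity graph over all (labeled and unlabeled) samples
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)
    # One-hot label matrix for the labeled samples only
    classes = np.unique(y[y >= 0])
    Y = np.zeros((len(y), len(classes)))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    # Iteratively diffuse label mass while anchoring the known labels
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (P @ F) + (1 - alpha) * Y
    return classes[F.argmax(axis=1)]
```

Because inference happens jointly over the whole sample set, the unlabeled points shape the solution, which is the essence of the transductive setting the method exploits.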
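Contribution (3) relies on guided filtering to suppress edge artifacts. Below is a compact NumPy sketch of the standard guided filter (He et al.); with an image as its own guide and a small regularizer it behaves as an edge-preserving smoother. The radius `r` and regularizer `eps` are illustrative, not the dissertation's settings.

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window with edge replication."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def guided_filter(I, p, r=1, eps=1e-4):
    """Edge-preserving smoothing of p, guided by the structure of I."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    var_I = box_filter(I * I, r) - mean_I ** 2
    # Local linear model q = a*I + b within each window
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_filter(a, r) * I + box_filter(b, r)
```

With `p = I` and a tiny `eps`, the local linear coefficients approach `a = 1, b = 0`, so the filter nearly reproduces its input; larger `eps` trades edge fidelity for smoothing, which is what makes it useful for cleaning up transform-domain artifacts.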
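The region-adaptive fusion strategy in (4), maximum Laplacian-energy selection in dissimilar regions and energy-weighted averaging in similar ones, can be sketched per block as follows. Plain patch correlation stands in for the mutual-information correlation measure, and the block size and similarity threshold are illustrative assumptions.

```python
import numpy as np

def lap_energy(p):
    """Sum of squared discrete-Laplacian responses over the patch interior."""
    lap = (4 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]
           - p[1:-1, :-2] - p[1:-1, 2:])
    return float((lap ** 2).sum())

def fuse_blocks(A, B, bs=4, tau=0.6):
    """Fuse two registered source images block by block."""
    F = np.zeros_like(A, dtype=float)
    for i in range(0, A.shape[0], bs):
        for j in range(0, A.shape[1], bs):
            pa, pb = A[i:i + bs, j:j + bs], B[i:i + bs, j:j + bs]
            ea, eb = lap_energy(pa), lap_energy(pb)
            if pa.std() > 0 and pb.std() > 0:
                c = np.corrcoef(pa.ravel(), pb.ravel())[0, 1]
            else:
                c = 0.0  # a flat block carries no correlation evidence
            if c >= tau:   # similar region: energy-weighted average
                w = ea / (ea + eb + 1e-12)
                F[i:i + bs, j:j + bs] = w * pa + (1 - w) * pb
            else:          # dissimilar region: keep the higher-energy block
                F[i:i + bs, j:j + bs] = pa if ea >= eb else pb
    return F
```

Averaging where the modalities agree suppresses noise, while selection where they disagree preserves the detail that only one modality contributes, which is the trade-off the NSST-domain strategy is built around.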
Keywords/Search Tags: Multi-Modal, Machine Learning Classification, Feature Fusion, Convolutional Sparse Representation, Transductive Learning, Clustering