
Study of the Key Algorithms in Multi-modal Medical Image Registration and Fusion

Posted on: 2014-01-10    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L Wang    Full Text: PDF
GTID: 1268330425476728    Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
In medical imaging, anatomical images (CT, MRI, etc.) clearly depict the anatomical structure of organs at high resolution but cannot show their function, whereas functional images (SPECT, PET, etc.) capture metabolic information well but, because of their low resolution, cannot reveal morphological detail. In practice, neither kind of image provides complete information on its own. The best way to solve this problem is multi-modal medical image registration and fusion, which make better joint use of anatomical and functional information and give doctors an easy yet effective way to recognize lesion structures and functional changes by studying data from different modalities.

Although traditional multi-modal medical image registration and fusion algorithms have been very successful, several problems remain because of the following limitations.

First, successful single-modality registration methods cannot be applied directly to the registration of multi-modal images, because their objective (similarity) measures are based on gray values and therefore cannot describe the differences between modalities. Such measures have many advantages, including simple form, easy optimization, and high speed, but once they are used for multi-modal registration the results are poor, since the source images differ greatly. Measures based on the gray-level distribution, such as mutual information and entropy, have been proposed with some success, but they suffer from complex forms and the loss of spatial feature information, which lowers their robustness because the optimization tends to reach only local minima. How to apply the similarity measures of single-modal registration methods to multi-modal image registration therefore remains an open problem.

Second, in multi-modal medical image fusion, wavelet-transform-based methods have been widely reported. The wavelet transform, however, represents point features well but not higher-dimensional features such as contours and edges. Some newer multi-scale geometric transforms have been introduced into medical image fusion, but they have their own limitations. Efficient representation of these higher-dimensional features is critical for improving fusion performance.

Third, most multi-scale geometric transform based fusion methods rest on the same assumption that all coefficients in different subbands are statistically independent. Operations are therefore applied directly to individual coefficients without fully considering the statistical dependency between them, so important feature information may fail to be transferred into the fused image.

Finally, most existing medical image fusion algorithms are formulated only in low-dimensional (2D) space; few fusion results have been reported in high-dimensional (3D) space.

In this dissertation, we propose a series of improved algorithms to deal with the above problems.
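To make the first limitation concrete, the minimal Python sketch below (illustrative only, not code from this dissertation; the image names, the intensity remapping, and the bin count are arbitrary assumptions) contrasts a plain gray-value measure, the sum of squared differences (SSD), with mutual information (MI) on a synthetic pair imitating two modalities: the images are perfectly aligned but have very different gray values, so the SSD is misleadingly large, while the MI, which only assumes a statistical dependency between the two gray-level distributions, stays high.

# Illustrative sketch only: why a gray-value measure such as SSD fails across
# modalities while mutual information does not. All names and values are toy
# assumptions, not taken from the dissertation.
import numpy as np

def ssd(a, b):
    # Sum of squared gray-value differences: meaningful only when both images
    # share the same intensity mapping (the single-modality case).
    return np.sum((a.astype(float) - b.astype(float)) ** 2)

def mutual_information(a, b, bins=64):
    # MI estimated from the joint gray-level histogram of the two images.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)    # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = pxy > 0                           # skip empty bins to avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

# Toy pair: the "functional" image is a nonlinear remapping of the "anatomical"
# one, so the structures are identical but the contrast is very different.
anatomical = np.random.rand(128, 128)
functional = np.exp(-3.0 * anatomical)
print("SSD (large although the images are aligned):", ssd(anatomical, functional))
print("MI (high, reflecting the alignment):", mutual_information(anatomical, functional))

This is the mutual-information family of measures discussed above, whose complex form and loss of spatial information motivate the modality mapping proposed in this work.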
The main research contents include:

First, based on local feature descriptors of medical images, a local feature descriptor modality mapping (LFDMM) is proposed. It bridges the gap between multi-modal and single-modal image registration by mapping images of different modalities into the same, or a similar, modality. The LFDMM makes full use not only of the gray values of the images but also of the position, scale, and orientation of each point. With the help of the LFDMM, the similarity measures of single-modal registration algorithms can be applied to multi-modal registration, and robustness and accuracy benefit from their respective advantages.

Second, we carefully study the advantages and disadvantages of widely used multi-scale geometric transforms, such as the wavelet and contourlet transforms. To provide better representations of higher-dimensional features in multi-modal medical images, a novel fusion algorithm based on the shift-invariant shearlet transform (SIST) is proposed.

Third, to avoid losing the statistical dependency, we carefully study the statistical properties of the highpass subband coefficients of the SIST. Two statistical models, a hidden model and an explicit model, are proposed to describe the dependency between different highpass subbands, and dependency-embedding fusion rules are proposed so that this dependency is fully exploited during fusion.

Finally, we study the 3D shearlet transform and reveal the statistical properties of its coefficients. Unlike traditional locally computed fusion rules, a novel fusion rule named the Global-to-Local rule is proposed, based on the asymmetry of the Kullback-Leibler distance (KLD) between two highpass subbands.
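As a small illustration of the property behind the Global-to-Local rule, the sketch below (again illustrative only, not the dissertation's algorithm; the "subbands" are synthetic Laplacian samples standing in for sparse, heavy-tailed highpass shearlet coefficients) shows that the Kullback-Leibler distance between two coefficient histograms is asymmetric, i.e. KLD(A||B) differs from KLD(B||A).

# Illustrative sketch only: the asymmetry of the Kullback-Leibler distance
# between the coefficient histograms of two highpass subbands. The subband
# data are synthetic; a real pipeline would take them from the (3D) shearlet
# decomposition of the two source images.
import numpy as np

def kld(p, q, eps=1e-12):
    # Discrete Kullback-Leibler distance D(p || q) between two histograms.
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))

rng = np.random.default_rng(0)
# Highpass coefficients of shearlet-like transforms are sparse and heavy-tailed;
# Laplacian samples with different scales stand in for two subbands here.
subband_a = rng.laplace(scale=0.5, size=100_000)
subband_b = rng.laplace(scale=1.5, size=100_000)

bins = np.linspace(-8.0, 8.0, 129)
hist_a, _ = np.histogram(subband_a, bins=bins)
hist_b, _ = np.histogram(subband_b, bins=bins)

print("KLD(A || B) =", kld(hist_a, hist_b))
print("KLD(B || A) =", kld(hist_b, hist_a))   # differs from the line above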
Keywords/Search Tags:medical image registration, medical image fusion, modality mapping, shift-invariant shearlet transform, statistical dependency