Image fusion integrates images of the same scene acquired by different sensors into a single new image that contains richer detail and in which every object in the scene can be clearly identified, which facilitates further image processing and analysis. With deepening research in the field of image fusion, a variety of image fusion methods have been proposed. Multi-scale transform tools such as the curvelet transform, the contourlet transform, the shearlet transform, and the nonsubsampled shearlet transform (NSST) can perform multi-scale decomposition of an image and extract salient features from the decomposition layers, which helps improve the accuracy of feature extraction. Owing to its good multiresolution analysis properties and multi-directional anisotropy, the NSST can meet the requirements of different scales and directions and has promising application prospects. Research has found that correlations exist among multi-scale coefficients, and statistical models can be used to obtain an accurate representation of the coefficients and improve the quality of image fusion. Therefore, studying the statistical correlation of multi-scale decomposition coefficients and extracting statistical features in the transform domain for image fusion has great research value and theoretical significance. This paper explores in depth the extraction of more accurate statistical features in the transform domain and applies them to image fusion. The main research contents of this paper are as follows:

(1) The existing contextual hidden Markov model (CHMM) divides the coefficients into only two states, which leads to coarse modeling. A fusion method based on a multi-state contextual hidden Markov model (MCHMM) is therefore proposed, in which the multiple states correspond to different levels of coefficient detail. A multi-state zero-mean Gaussian mixture model (GMM) is used to characterize the distribution of the high-frequency subband coefficients, and a soft context variable is designed to describe the detail of the coefficients accurately from the perspective of context. The MCHMM is then built on the high-frequency coefficients to extract contextual statistical parameters. The proposed method improves the granularity of the model and represents the image more accurately. Experimental results show that the proposed algorithm obtains high-quality fused images.

(2) The CHMM also suffers from inaccurate modeling: inappropriate parameter settings can reduce the expressive ability of the model and result in sub-optimal fused images. Therefore, a multi-modal image fusion method based on an interval type-2 fuzzy-set contextual hidden Markov model (T2-FCHMM) is proposed. Interval type-2 fuzzy sets are used to evaluate the uncertainty of the CHMM, and on this basis the T2-FCHMM is established. Fuzzy entropy is introduced to evaluate the fuzziness of the T2-FCHMM, which improves the generalization ability and robustness of the model. An activity measure composed of the statistical characteristics and the regional energy of the high-frequency subband coefficients is used to guide the fusion of the high-frequency subbands. The low-frequency subband is fused by adaptive weighting based on regional energy and variance, which better maintains the contrast of the source images. Experimental results show that the proposed algorithm achieves superior results in both subjective visual perception and objective evaluation.

(3) The context scheme of the above CHMM is calculated from a single coefficient and ignores the influence of local and even global coefficients. A CHMM based on a multi-input cellular neural network (MCNN) is therefore proposed for image fusion. The dynamic propagation effects of the MCNN are used to obtain globally optimized context variables, and the accuracy and robustness of feature extraction are further improved through the iteration of the network. The CHMM is then established on the high-frequency coefficients, and the fused high-frequency subbands are obtained with weighted fusion rules based on the detail of the coefficients. The low-frequency subband adopts a choose-max rule based on the regional energy of the low-frequency subband coefficients. Experiments demonstrate the method's effectiveness.

(4) The above methods model each source image individually and do not consider the correlation between the source images. Therefore, an image fusion method based on the similarity of multiple features extracted by a cellular neural network (CNN) is proposed. To overcome the problem that a single feature cannot interpret an image accurately, a multi-feature representation of the image is extracted from the regional energy, regional variance, and CHMM statistical features of the coefficients. Multi-feature similarity is calculated to measure the difference in detail information between the two images, which helps improve the accuracy and reliability of the fused image. Different fusion strategies are adopted for regions with different attributes to improve the clarity of the fused image. Experimental results show that the proposed algorithm has superior fusion performance.
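The multi-state zero-mean GMM underlying item (1) can be illustrated with a short sketch. Assuming the high-frequency subband coefficients are modeled as a mixture of zero-mean Gaussians whose variances encode levels of detail (small variance for smooth regions, large variance for edges), an EM fit might look as follows. The function name, state count, and initialization are illustrative placeholders, not the thesis implementation:

```python
import numpy as np

def fit_zero_mean_gmm(coeffs, n_states=3, n_iter=50):
    """EM for a zero-mean Gaussian mixture over subband coefficients.

    Each state corresponds to a different level of detail; only the
    state weights and variances are estimated (means are fixed at 0).
    Illustrative sketch only.
    """
    x = np.asarray(coeffs, dtype=float).ravel()
    # Initialise variances on a spread of scales, weights uniformly.
    var = np.var(x) * np.logspace(-1, 1, n_states)
    w = np.full(n_states, 1.0 / n_states)
    for _ in range(n_iter):
        # E-step: responsibility of each state for each coefficient.
        dens = w * np.exp(-0.5 * x[:, None] ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights and (zero-mean) variances.
        nk = resp.sum(axis=0)
        w = nk / x.size
        var = (resp * x[:, None] ** 2).sum(axis=0) / nk
    return w, var
```

The per-coefficient responsibilities from the E-step play the role of soft state assignments, which is the kind of quantity a soft context variable can be built from.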
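The regional-energy-based rules mentioned in items (2) and (3) can likewise be sketched. The window size and the adaptive weighting form below are assumptions made for illustration, not the exact rules of the thesis:

```python
import numpy as np

def regional_energy(band, radius=1):
    """Sum of squared coefficients over a (2*radius+1)^2 neighbourhood."""
    p = np.pad(band ** 2, radius, mode="reflect")
    h, w = band.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out

def fuse_low_frequency(a, b, eps=1e-12):
    """Adaptively weighted fusion of two low-frequency subbands by
    regional energy (hypothetical weighting; the thesis also uses
    regional variance)."""
    ea, eb = regional_energy(a), regional_energy(b)
    wa = ea / (ea + eb + eps)
    return wa * a + (1 - wa) * b
```

The choose-max variant in item (3) would instead select, pixel by pixel, the coefficient from whichever subband has the larger regional energy.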
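Finally, the multi-feature similarity idea in item (4) can be sketched as a per-pixel comparison of feature maps followed by region-dependent fusion. The similarity form, threshold, and both fusion strategies below are hypothetical choices for illustration:

```python
import numpy as np

def multi_feature_similarity(fa, fb, eps=1e-12):
    """Per-pixel similarity of two non-negative feature maps (e.g.
    regional energy or regional variance), in (0, 1]; 1 means the
    two images carry identical detail at that pixel."""
    return (2 * fa * fb + eps) / (fa ** 2 + fb ** 2 + eps)

def fuse_by_similarity(a, b, feats_a, feats_b, thresh=0.8):
    """Where every feature similarity is high, average the coefficients;
    where the images differ, keep the coefficient of larger magnitude.
    Threshold and strategies are illustrative assumptions."""
    sim = np.minimum.reduce([multi_feature_similarity(fa, fb)
                             for fa, fb in zip(feats_a, feats_b)])
    avg = 0.5 * (a + b)
    pick = np.where(np.abs(a) >= np.abs(b), a, b)
    return np.where(sim >= thresh, avg, pick)
```

Taking the minimum over the per-feature similarities makes the rule conservative: a region is treated as "similar" only when all features agree.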