
The Research On Image Fusion Based On Multiscale Filter And Sparse Representation

Posted on: 2014-08-22
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J W Hu
Full Text: PDF
GTID: 1268330428968899
Subject: Control Science and Engineering
Abstract/Summary:
Image fusion is an important way of integrating multisensor image information in the fields of image processing and computer vision. It is the process of combining information from two or more images of the same scene or object into a single composite image that is more informative and better suited to visual perception and computer processing. Image fusion is now widely used in military applications, medical diagnosis, remote sensing, and other fields. Building on classical multiscale theory and the recent theories of sparse representation and low rank representation, this thesis proposes multimodal image fusion methods and remote sensing image fusion methods. The main contributions are as follows.

Firstly, a remote sensing image fusion method based on a multiscale joint filter is proposed. Multiscale methods are widely used in remote sensing image fusion: they extract details from the panchromatic image and inject them into the low-resolution multispectral image to increase its resolution. However, because the spectral responses of different bands differ, the details of the panchromatic image differ from those of the multispectral image, a problem that traditional multiscale detail-extraction methods ignore. This thesis uses a dual bilateral filter to take the characteristics of both the panchromatic and the multispectral images into account simultaneously. To extract details at multiple scales, a multiscale dual bilateral filter is proposed. Since translation invariance is important for image fusion, the à trous scheme is used in the multiscale dual bilateral filter to ensure it.

Secondly, the multiscale directional bilateral filter is developed and applied to multimodal image fusion. Edge and directional features are common in images of all kinds, which makes them very important for image processing.
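The dual-bilateral detail extraction described above can be sketched as follows. This is a minimal single-level illustration, not the thesis's implementation: the range kernel combines photometric weights from both the panchromatic image and a multispectral intensity image, so that edges present in either image suppress smoothing across them. All function and parameter names (`sigma_s`, `sigma_r`, the window `size`) are illustrative assumptions.

```python
import numpy as np

def dual_bilateral_filter(pan, ms, size=5, sigma_s=2.0, sigma_r=0.1):
    """Single-level dual bilateral filter (sketch).

    pan: panchromatic image, ms: multispectral intensity image,
    both 2-D float arrays of the same shape. The range kernel is the
    product of photometric weights computed from BOTH images.
    """
    h, w = pan.shape
    r = size // 2
    # Spatial Gaussian weights over the (2r+1) x (2r+1) window.
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    padded_p = np.pad(pan, r, mode='reflect')
    padded_m = np.pad(ms, r, mode='reflect')
    out = np.empty_like(pan, dtype=float)
    for i in range(h):
        for j in range(w):
            wp = padded_p[i:i + size, j:j + size]
            wm = padded_m[i:i + size, j:j + size]
            # Dual range kernel: edges in either pan or ms reduce the weight.
            range_w = np.exp(-((wp - pan[i, j])**2 + (wm - ms[i, j])**2)
                             / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = np.sum(weights * wp) / np.sum(weights)
    return out

# Detail layer to be injected into the upsampled multispectral band:
# detail = pan - dual_bilateral_filter(pan, ms)
```

In the multiscale version described in the thesis, this filtering would be repeated at increasing scales (the à trous scheme dilates the window without subsampling, which is what preserves translation invariance).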
The multiscale directional bilateral filter is developed to combine edge preservation and direction capture. The multiscale bilateral filter, with its edge-preserving ability, is first applied to the source images to obtain a low-frequency subband and several high-frequency subbands. A directional filter is then applied to the high-frequency subbands to capture directional features. The multiscale directional bilateral filter is applied to multimodal image fusion to verify its effectiveness: the source images are decomposed into a low-frequency subband and several directional subbands, these subbands are fused across source images according to a given fusion rule, and the inverse multiscale directional bilateral filter is applied to the fused subbands to obtain the fused image. Experiments on infrared-visible and medical multimodal images demonstrate the effectiveness of the proposed method.

Thirdly, a remote sensing image fusion method based on sparse representation is proposed. Sparse representation is a novel image representation theory that represents an image sparsely over an overcomplete dictionary; the resulting sparse coefficients and their corresponding atoms reveal the intrinsic properties of images effectively, and neurons in the primary visual cortex process retinal images in a similar sparse-coding manner. Combining sparse representation with the generalized IHS transform, a novel remote sensing image fusion method is proposed in this thesis. The human visual system is relatively insensitive to red, green, and blue band information but sensitive to intensity, hue, and saturation. Exploiting this characteristic, the generalized IHS transform is used to obtain the intensity component of the multispectral image. The intensity component and the panchromatic image are then merged by sparse representation to obtain the fused intensity component.
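The sparse merging step can be illustrated with a fusion rule that is common in the sparse-representation fusion literature; the max-l1 "activity" selection below is an assumption for illustration, not necessarily the rule used in the thesis. `alpha_i` and `alpha_p` are hypothetical names for the sparse codes of the intensity-component patches and the panchromatic patches over a shared dictionary `D`.

```python
import numpy as np

def fuse_sparse_codes(alpha_i, alpha_p):
    """Patch-wise fusion of sparse coefficient vectors (sketch).

    alpha_i, alpha_p: (n_atoms, n_patches) sparse codes over a shared
    overcomplete dictionary. For each patch, the l1-norm of its code is
    used as an activity measure, and the more active code is kept.
    """
    act_i = np.abs(alpha_i).sum(axis=0)   # activity per intensity patch
    act_p = np.abs(alpha_p).sum(axis=0)   # activity per panchromatic patch
    choose_i = act_i >= act_p
    return np.where(choose_i[None, :], alpha_i, alpha_p)

# Fused intensity patches would be reconstructed as D @ fused_alpha and
# reassembled before applying the inverse generalized IHS transform.
```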
The details of different spectral bands differ, and a direct inverse generalized IHS transform would add the same details to every band. To better improve spatial resolution while preserving spectral fidelity, the amount of detail added to each band is adjusted according to its pixel values.

Finally, a multimodal image fusion method combining a nonlocal operator and low rank representation is proposed. The aim of image fusion is to integrate the salient information of the source images, and salience depends on nonlocal as well as local information. Traditional image fusion methods use only local information and neglect nonlocal information, so some details are lost in the fused image. Thus, a new multimodal image fusion method combining a nonlocal operator and low rank representation is proposed in this thesis. For each image patch in a source image, similar patches are searched for in a nonlocal neighborhood; these similar patches are represented in a low rank form, and the low rank constraint imposes consistency among them. The proposed method can therefore effectively integrate the salient information of the source images.
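The consistency-by-low-rank idea can be sketched by stacking a group of similar patches as columns of a matrix and keeping a low-rank approximation of it. The fixed-rank truncated SVD below is a simplification: low rank representation as used in the thesis solves a nuclear-norm regularized problem rather than truncating at a fixed rank, and the function name and `rank` parameter are illustrative.

```python
import numpy as np

def low_rank_patch_group(patches, rank=3):
    """Low-rank approximation of a group of similar patches (sketch).

    patches: (patch_dim, n_similar) matrix whose columns are vectorized
    similar patches found by a nonlocal search. Keeping only the top
    singular components enforces consistency among the similar patches.
    """
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    s[rank:] = 0.0                 # discard the small singular values
    return (U * s) @ Vt            # reconstruct the low-rank approximation
```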
Keywords/Search Tags: Multimodal Image Fusion, Remote Sensing Image Fusion, Sparse Representation, Low Rank Representation, Bilateral Filter