
Research on Neural Network Methods for Multi-Modal Image Fusion

Posted on: 2020-03-03
Degree: Master
Type: Thesis
Country: China
Candidate: R C Hou
Full Text: PDF
GTID: 2428330572480081
Subject: Systems Engineering
Abstract/Summary:
With the rapid development of computer science, integrated circuit systems, and sensor technology, imaging is no longer limited to single-modal sensors. Faced with diversified image types and complex data, a single-modality image can hardly satisfy the demands of modern applications, so image fusion technology deserves further theoretical study and development. Image fusion combines complementary images of different modalities, obtained from different sensors, into one informative and reliable fused image that comprehensively describes a scene or target, which benefits subsequent image processing and decision-making tasks. Its core idea is to optimize the image data and extract meaningful information without artificial interference. Multi-modal image fusion is widely applied in modern military systems, video surveillance, remote sensing, medical image analysis, and other fields. In this dissertation, we study infrared-and-visible image fusion and multi-modal medical image fusion in depth, to explore the application of artificial neural networks to the image fusion field. The main contributions of this dissertation are as follows:

1. Infrared and visible image fusion based on an optimized artificial neural network. Considering that the traditional pulse-coupled neural network has complex parameters and lacks adaptability, we propose an infrared and visible image fusion method using an optimized neural network that combines a bee colony optimization algorithm with the spiking cortical model (SCM). First, the non-subsampled shearlet transform (NSST) is used to decompose each source image into a low-frequency coefficient and a series of high-frequency coefficients. Second, the low-frequency coefficients are fused by an SCM stimulated by the salient decision map of the low-frequency coefficient; the high-frequency coefficients are likewise fused by an SCM, with their modified spatial frequency adopted as the input stimulus and the SCM parameters optimized by a novel multi-objective artificial bee colony technique. Finally, the fused image is reconstructed by the inverse NSST. Experimental results indicate that the proposed method effectively addresses the problem of setting neural network parameters from artificial experience, giving it adaptability and robustness, and that the fused results contain rich thermal information and detail.

2. Multi-modal medical image fusion combining a traditional neural network with deep convolutional neural networks. The aim of medical image fusion is to improve clinical diagnosis accuracy, so the fused image must preserve both the soft-tissue and the skeletal information of the source images. We design a novel fusion scheme for CT and MRI medical images based on convolutional neural networks (CNNs) and a dual-channel spiking cortical model (DCSCM). First, the NSST decomposes each source image into a low-frequency coefficient and a series of high-frequency coefficients. Second, the low-frequency coefficients are fused by a CNN framework, in which a weight map is generated from a series of feature maps by an adaptive selection rule; the high-frequency coefficients are fused by the DCSCM, with their modified average gradient adopted as the input stimulus. Finally, the fused image is reconstructed by the inverse NSST. Experimental results indicate that the proposed scheme performs well in both subjective visual quality and objective evaluation, and surpasses other current typical methods in detail retention and visual effect.

3. Infrared and visible image fusion based on unsupervised deep convolutional neural networks. In recent years, CNNs have been applied to information fusion; however, existing methods use a pre-trained CNN model only as a feature extractor, so the network never learns to integrate or select deep features adaptively. We present an unsupervised end-to-end learning framework for infrared and visible image fusion. First, we generate a sufficiently large benchmark training dataset from the source visible and infrared images; the deep network is then trained with a robust hybrid loss function consisting of a modified structural similarity (M-SSIM) metric and total variation (TV). Extensive experimental results demonstrate that the proposed architecture outperforms state-of-the-art methods in both subjective and objective evaluation.
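The "modified spatial frequency" stimulus of contribution 1 builds on the classical spatial-frequency measure; the modification itself is not detailed in this abstract, but the underlying measure can be sketched as follows (the function name and block handling are illustrative, not the thesis's implementation):

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Classical spatial frequency of an image block:
    SF = sqrt(RF^2 + CF^2), where RF and CF are the RMS
    row-wise and column-wise gray-level differences."""
    block = block.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

A flat block yields zero, while richly textured blocks yield large values, so in a fusion rule the high-frequency coefficient with the larger stimulus tends to dominate the fused result.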
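Similarly, the "modified average gradient" stimulus of contribution 2 builds on the classical average-gradient measure of image sharpness, which can be sketched as follows (again an illustrative sketch of the base measure, not the thesis's modified variant):

```python
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    """Classical average gradient: mean of sqrt((Gx^2 + Gy^2) / 2)
    over forward differences; larger values indicate sharper detail."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences, trimmed to a common shape
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences, trimmed to a common shape
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```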
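The hybrid loss of contribution 3 combines an SSIM-based similarity term with a total-variation term. A minimal sketch, assuming a global (single-window) SSIM in place of the thesis's M-SSIM and an illustrative weight `lam` (the thesis's exact formulation and weighting are not given in this abstract):

```python
import numpy as np

C1, C2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2  # standard SSIM stabilizers for 8-bit range

def global_ssim(x: np.ndarray, y: np.ndarray) -> float:
    """SSIM computed over the whole image instead of local windows --
    a simplification of the M-SSIM term described in the abstract."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + C1) * (2 * cov + C2))
                 / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

def total_variation(x: np.ndarray) -> float:
    """Anisotropic total variation: sum of absolute forward differences."""
    x = x.astype(np.float64)
    return float(np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum())

def hybrid_loss(fused: np.ndarray, reference: np.ndarray, lam: float = 1e-3) -> float:
    """Hybrid objective: a (1 - SSIM) similarity term plus a TV smoothness term."""
    return (1.0 - global_ssim(fused, reference)) + lam * total_variation(fused)
```

In the unsupervised setting, minimizing `1 - SSIM` pulls the fused output toward the source images without needing ground-truth fused labels, while the TV term discourages noise and artifacts.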
Keywords/Search Tags:Image fusion, Visual saliency, Spiking cortical model, Multi-objective optimization algorithm, Convolutional neural networks