
Research On Multi-source Image Fusion Technology Based On Convolutional Neural Network

Posted on: 2020-05-29
Degree: Master
Type: Thesis
Country: China
Candidate: Y X Li
Full Text: PDF
GTID: 2428330602450743
Subject: Microelectronics and Solid State Electronics

Abstract/Summary:
Multi-source image fusion refers to merging multiple images that carry complementary information into a single image that retains the most useful information. Depending on the input images, it can be subdivided into multi-focus image fusion, multi-exposure image fusion, infrared-visible image fusion, and so on. These fusion technologies are already widely applied in both military and civilian fields, so it is of great theoretical and practical value to study multi-source image fusion methods with higher performance, lower computational cost, and stronger robustness.

This paper first studies the basic principles of several classical multi-source image fusion methods, including traditional methods and the convolutional neural network (CNN) based method. The traditional methods include the simple weighted-average method and multi-scale decomposition (MSD) based methods. Compared with traditional methods, the CNN-based approach has clear advantages: the fusion strategy that must be carefully hand-designed in traditional methods is learned automatically by the network, which alleviates the inadequate fusion accuracy and poor adaptability of traditional methods.

The classification-network-based multi-focus image fusion method pioneered the use of image classification networks in image fusion, and most subsequent multi-focus fusion methods developed from it. However, because of the classification network and the block-based strategy, such methods fuse poorly at the boundary between focused and defocused regions, often producing blurred edges and mixing content from the two regions. In addition, their fusion quality depends on a carefully designed but cumbersome post-processing procedure, which limits their practical applicability.

To address these problems, this paper proposes a new multi-focus image fusion method based on an end-to-end CNN. The network generates the final fusion decision map directly from the source images; a multi-scale feature extraction unit and an attention unit are introduced to extract cross-scale complementary structural features and to separate focused and defocused regions accurately, which greatly improves fusion performance. Compared with several state-of-the-art methods, the proposed method achieves better fusion quality and stronger real-time performance.
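The end-to-end decision-map idea can be illustrated with a minimal PyTorch sketch. The module names, channel widths, kernel sizes, and the simple sigmoid-mask attention unit below are illustrative assumptions, not the exact architecture described in the thesis.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Extracts features at several receptive fields and concatenates them
    (illustrative stand-in for a multi-scale feature extraction unit)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch // 4, k, padding=k // 2) for k in (1, 3, 5, 7)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

class SpatialAttention(nn.Module):
    """Simple spatial attention: re-weights feature maps with a sigmoid mask."""
    def __init__(self, ch):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        return x * self.mask(x)

class FusionNet(nn.Module):
    """Takes two source images and directly predicts a per-pixel decision map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            MultiScaleBlock(2, 32),   # two grayscale sources stacked on the channel axis
            MultiScaleBlock(32, 64),
            SpatialAttention(64),
        )
        self.head = nn.Sequential(nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, img_a, img_b):
        decision = self.head(self.features(torch.cat([img_a, img_b], dim=1)))
        return decision * img_a + (1.0 - decision) * img_b  # fused image

# Usage: fuse two single-channel multi-focus sources of the same size.
net = FusionNet()
a, b = torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)
fused = net(a, b)   # shape (1, 1, 256, 256)
```

Because the decision map is produced densely for every pixel, no block-based classification or hand-tuned post-processing step is needed, which is the motivation for the end-to-end design described above.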
In addition, to address the poor fusion quality under extreme exposure conditions and the complicated fusion pipelines of existing multi-exposure image fusion methods, this paper also proposes a multi-exposure image fusion method based on unsupervised learning. Unlike most existing methods, it does not require large amounts of time and manpower to capture multi-exposure images under various special conditions; instead, it generates the source images automatically from ordinary natural images by simulation. Because multi-exposure images differ greatly in brightness, the method first separates the chroma and luma channels of the source images and fuses the two kinds of channel with different strategies. The luma channels are fused by a CNN that adopts the idea of hierarchical processing and uses a new weighted structural similarity index as the loss function to improve fusion accuracy.

Subjective and objective experiments show that the new method outperforms existing classical methods in both visual quality and quantitative fusion accuracy. Moreover, the proposed network model can also be applied to infrared-visible image fusion, demonstrating strong scalability to other applications.
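A weighted structural-similarity loss of this general kind can be sketched as follows. The Gaussian window size, stability constants, and the use of a per-pixel weight map against a reference luminance image (e.g. the natural image from which the simulated exposures were generated) are illustrative assumptions rather than the exact loss formulated in the thesis.

```python
import torch
import torch.nn.functional as F

def _gaussian_window(size=11, sigma=1.5):
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)
    return (g.t() @ g).view(1, 1, size, size)  # 2-D Gaussian kernel

def weighted_ssim_loss(fused, reference, weight, c1=0.01 ** 2, c2=0.03 ** 2):
    """SSIM computed over local Gaussian windows, then averaged under a
    per-pixel weight map. All tensors are (N, 1, H, W) luma images in [0, 1]."""
    win = _gaussian_window().to(fused.device)
    mu_x = F.conv2d(fused, win, padding=5)
    mu_y = F.conv2d(reference, win, padding=5)
    sigma_x = F.conv2d(fused * fused, win, padding=5) - mu_x ** 2
    sigma_y = F.conv2d(reference * reference, win, padding=5) - mu_y ** 2
    sigma_xy = F.conv2d(fused * reference, win, padding=5) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    w = F.conv2d(weight, win, padding=5)  # smooth the weight map to match window support
    return 1.0 - (w * ssim_map).sum() / w.sum().clamp_min(1e-8)

# Usage: penalize structural dissimilarity between the fused luma and a reference luma.
fused = torch.rand(2, 1, 128, 128)
ref = torch.rand(2, 1, 128, 128)
weight = torch.ones_like(ref)   # uniform weights reduce this to plain SSIM
loss = weighted_ssim_loss(fused, ref, weight)
```

A spatially varying weight map (for example, emphasizing well-exposed regions of each source) lets the loss focus the network on structure that is actually recoverable, which is the intuition behind weighting the structural similarity index.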
Keywords/Search Tags:image fusion, convolutional neural network, multi-focus, multi-exposure, unsupervised learning