
Research On Multi-Focus Image Fusion Based On Deep Learning

Posted on: 2020-10-17
Degree: Master
Type: Thesis
Country: China
Candidate: Z P Nie
Full Text: PDF
GTID: 2428330623981123
Subject: Computer Science and Technology
Abstract/Summary:
Multi-focus image fusion is a widely studied topic in the field of image fusion. It overcomes the depth-of-field limitation of imaging equipment by combining multiple images focused on different regions into a single, fully sharp all-in-focus image, thereby improving the utilization of the original image information. Multi-focus image fusion involves multiple disciplines and has been widely applied in the medical, military, and public safety fields. Depending on how the image is processed, image fusion can be divided into three levels: pixel level, feature level, and decision level. Compared with the latter two, pixel-level fusion methods effectively reduce information loss and retain more of the information in the images to be fused. However, most pixel-level fusion methods rely on empirically designed feature extraction to measure the degree of focus, that is, the activity of each pixel. This thesis therefore proposes fusion methods based on deep learning that extract features automatically. For pixel-level fusion, the key issues are detecting the focused regions accurately and preserving rich edge information. The aim of this thesis is to improve the recognition of focused regions in the images to be fused, to extract the sharp parts and edge details, and to study several problems of applying deep learning to multi-focus image fusion. The main contributions are as follows:

1. Because convolutional-neural-network-based multi-focus image fusion methods require large amounts of training data, an image fusion model based on deep features is proposed. First, the source images are encoded by different layers of a pre-trained VGG16 model. Then, the L1-norm of each feature map is computed to estimate the activity of each pixel, and the choose-max rule is applied to obtain an initial focus decision map. Next, a multi-layer fusion strategy combines the initial decision maps with suitable weights to obtain the final fusion weight map, which is used to fuse the source images (a sketch of this pipeline follows the abstract). Experiments show that, compared with other classical fusion methods, the proposed method better preserves the information of the source images.

2. Most traditional image fusion algorithms require hand-crafted features and considerable experience. To address this, an image fusion algorithm based on a multi-level convolutional neural network (MLFCNN) is proposed. In the MLFCNN model, features learned in the shallow layers are passed down to the deeper layers, and a 1×1 convolution module is added on each path between front and back layers to reduce redundancy. The source images are first fed into the pre-trained MLFCNN model to obtain an initial focus map; a morphological open-close operation and a Gaussian blur are then applied to obtain the final decision map; finally, the source images are weighted by the decision map to produce the fused image (a sketch of this post-processing step also follows the abstract). Experimental results show that the proposed algorithm outperforms existing fusion algorithms under both subjective and objective evaluation.
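
The first contribution can be illustrated with a minimal Python/PyTorch sketch, assuming torchvision's pre-trained VGG16 is used as the feature extractor. The specific ReLU layer indices, the equal layer weights, and the omission of input normalization and consistency verification are assumptions of this sketch, not the thesis's exact configuration; it only shows the chain described above: L1-norm activity maps, the choose-max rule, and fusion with the resulting weight map.

```python
# Sketch of contribution 1: deep-feature activity from a pre-trained VGG16,
# L1-norm activity maps, choose-max decision, and weighted fusion.
# Assumes grayscale source tensors of shape (1, 1, H, W) with values in [0, 1].
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
RELU_LAYERS = [3, 8, 15, 22]              # relu1_2, relu2_2, relu3_3, relu4_3 (assumed choice)
LAYER_WEIGHTS = [0.25, 0.25, 0.25, 0.25]  # equal weights for the multi-layer strategy (assumed)

def activity_maps(img):
    """L1-norm of VGG16 feature maps at several depths, upsampled to the image size."""
    x = img.repeat(1, 3, 1, 1)            # grayscale -> 3 channels expected by VGG16
    maps = []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in RELU_LAYERS:
                a = x.abs().sum(dim=1, keepdim=True)  # L1-norm over channels
                a = F.interpolate(a, size=img.shape[-2:], mode="bilinear",
                                  align_corners=False)
                maps.append(a)
            if i >= max(RELU_LAYERS):
                break
    return maps

def fuse(img_a, img_b):
    """Fuse two source images focused on different regions."""
    maps_a, maps_b = activity_maps(img_a), activity_maps(img_b)
    # Choose-max per layer yields an initial decision map; the maps are then
    # combined with fixed weights into the final fusion weight map.
    weight_map = sum(w * (a > b).float()
                     for w, a, b in zip(LAYER_WEIGHTS, maps_a, maps_b))
    return weight_map * img_a + (1.0 - weight_map) * img_b
```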
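
The second contribution's refinement step can likewise be sketched. The trained MLFCNN itself is not specified in the abstract, so `mlfcnn_score` below is a hypothetical stand-in for it, and the binarization threshold, structuring-element size, and blur sigma are illustrative assumptions. The sketch covers only the post-processing described above: a morphological open-close operation, a Gaussian blur, and the final weighted fusion with the decision map.

```python
# Sketch of contribution 2's post-processing: an initial focus map from the
# (unspecified) MLFCNN is refined by a morphological open-close operation and a
# Gaussian blur, then used as a weight map for fusion.
import cv2
import numpy as np

def refine_and_fuse(img_a, img_b, mlfcnn_score, kernel_size=7, blur_sigma=2.0):
    """img_a, img_b: float32 grayscale arrays in [0, 1] focused on different regions.
    mlfcnn_score: callable returning a per-pixel focus probability map for (img_a, img_b);
    it stands in for the thesis's trained MLFCNN and is not defined here."""
    initial = mlfcnn_score(img_a, img_b)                        # initial focus map in [0, 1]
    binary = (initial > 0.5).astype(np.uint8)                   # binarize before morphology

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove small false detections
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small holes

    # The Gaussian blur softens the focus boundary so the two sources blend smoothly.
    decision = cv2.GaussianBlur(closed.astype(np.float32), (0, 0), blur_sigma)
    decision = np.clip(decision, 0.0, 1.0)

    return decision * img_a + (1.0 - decision) * img_b
```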
Keywords/Search Tags: Multi-focus image fusion, Convolutional neural network, Multi-level, Deep features