People are exposed to a large number of digital images every day. Owing to the imaging principle of visible-light sensors, a single shot renders sharply only the objects within the depth of field, while the rest of the scene appears blurred. To improve the visual quality of such images, multi-focus image fusion can combine images focused on different regions, retaining the rich information of the focused areas, discarding the redundant information of the defocused areas, and producing a high-quality, fully focused image.

In image processing, multi-scale analysis is regarded as a way to simulate the analysis capability of human vision. Image fusion based on multi-scale transforms typically decomposes the source images into coefficients at different scales and directions, extracts and analyzes diverse features from them, and designs fusion rules according to the characteristics of these coefficients. This strategy improves the overall visual quality of the image, but because it does not accurately distinguish the focused regions, it tends either to keep redundant information or to lose detail. Deep neural networks, in contrast, can learn the feature extraction process adaptively and segment focused and defocused regions with relatively high accuracy. Combining deep learning with multi-scale transforms can therefore yield more accurate and richer fused images. This thesis studies the characteristics of these two families of fusion methods and proposes two multi-focus image fusion algorithms based on the nonsubsampled shearlet transform (NSST) and generative adversarial networks (GAN). The main work and contributions are as follows:

(1) To improve fused image quality and enrich image detail, a multi-focus image fusion method using a GAN in the NSST domain is proposed. The source images are first decomposed by the NSST into a low-frequency sub-band and multi-scale high-frequency sub-bands. Guided by the decision map output by the generative network, separate low-frequency and high-frequency fusion rules are then formulated in the NSST domain to fuse the coefficients of each layer (an illustrative sketch of such decision-map-guided rules is given below). Finally, the fused image is reconstructed by the inverse NSST. Experimental results show that the method effectively suppresses blocking effects and artifacts, fully aggregates the focus information of the source images, and produces fused images with rich detail and good visual quality.

(2) In practice, multi-focus images are often slightly misregistered, which easily introduces artifacts and misalignment into the fused image. In addition, traditional fusion methods based on single-scale stitching rules are often affected by the defocus spread effect (DSE). To address these problems, this thesis proposes a coarse fusion based on the NSST followed by a second, refining fusion based on a GAN. First, the decomposition coefficients of the source images are fused with conventional rules in the NSST domain and reconstructed into a coarse fused image by the inverse NSST. Then, guided by the decision map generated by the deep network, sharply focused pixels of the source images are substituted into the coarse fused image, avoiding the risk of DSE over the whole image (see the second sketch below). In addition, the GAN structure is adjusted by introducing an SE-Dense block that combines dense connections with a channel attention mechanism (see the third sketch below), which improves the classification ability of the network while reducing its size.
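The abstract does not give the exact NSST-domain fusion rules of method (1). The following is a minimal sketch under the assumption that the NSST sub-bands are already available as NumPy arrays of matching size and that the GAN generator outputs a focus decision map in [0, 1]; the weighted-average low-frequency rule, the decision-guided choose-max high-frequency rule, and the confidence threshold are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

def fuse_nsst_coefficients(low_a, low_b, highs_a, highs_b, decision_map):
    """Illustrative decision-map-guided fusion of NSST coefficients.

    low_a, low_b     : low-frequency sub-bands of the two source images
    highs_a, highs_b : lists of high-frequency sub-bands (one per scale/direction)
    decision_map     : focus map in [0, 1] from the generator (1 = image A in focus)
    All arrays are assumed to share the same spatial size (a simplification:
    in practice the decision map may need resizing per sub-band).
    """
    # Assumed low-frequency rule: decision-weighted average preserves global structure.
    fused_low = decision_map * low_a + (1.0 - decision_map) * low_b

    # Assumed high-frequency rule: follow the decision map where it is confident,
    # otherwise fall back to the larger absolute coefficient (choose-max).
    fused_highs = []
    for h_a, h_b in zip(highs_a, highs_b):
        confident = np.abs(decision_map - 0.5) > 0.25
        choose_a = np.where(confident, decision_map > 0.5, np.abs(h_a) >= np.abs(h_b))
        fused_highs.append(np.where(choose_a, h_a, h_b))
    return fused_low, fused_highs
```

The fused coefficients would then be passed to the inverse NSST to reconstruct the fused image.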
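For method (2), the refinement step can be pictured as replacing pixels of the coarse NSST fusion with source pixels wherever the decision map is confident, while keeping the coarse result near focus boundaries where DSE is strongest. This is a minimal sketch under that assumption; the thresholds and the handling of the uncertain boundary band are illustrative, not values from the thesis.

```python
import numpy as np

def refine_with_decision_map(coarse, img_a, img_b, decision_map,
                             focus_a_thr=0.9, focus_b_thr=0.1):
    """Substitute confidently focused source pixels into the coarse fused image.

    coarse       : coarse fused image reconstructed by the inverse NSST
    img_a, img_b : registered source images
    decision_map : focus map in [0, 1] (1 = image A in focus); thresholds are assumptions
    """
    mask_a = decision_map >= focus_a_thr   # confidently focused in image A
    mask_b = decision_map <= focus_b_thr   # confidently focused in image B
    refined = coarse.copy()
    refined[mask_a] = img_a[mask_a]
    refined[mask_b] = img_b[mask_b]
    return refined                         # boundary pixels keep the coarse fusion
```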
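The abstract only names an "SE-Dense block" combining dense connectivity with channel attention; one common way to realize that idea, offered purely as an assumed sketch rather than the thesis's exact architecture, is a dense block whose concatenated features are recalibrated by a squeeze-and-excitation (SE) gate. Layer count, growth rate, and gate placement below are assumptions.

```python
import torch
import torch.nn as nn

class SEGate(nn.Module):
    """Squeeze-and-excitation channel attention: global average pool, two FC layers, sigmoid scaling."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                       # per-channel recalibration

class SEDenseBlock(nn.Module):
    """Assumed SE-Dense block: densely connected 3x3 convolutions whose
    concatenated output is recalibrated by an SE gate."""
    def __init__(self, in_channels, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        channels = in_channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            channels += growth             # dense connectivity grows the channel count
        self.se = SEGate(channels)

    def forward(self, x):
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))
        return self.se(torch.cat(features, dim=1))
```

Dense connections reuse earlier features and keep the parameter count low, while the SE gate emphasizes the channels most useful for the focused/defocused classification, which is consistent with the stated goal of improving classification ability while reducing network size.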