Multi-focus image fusion synthesizes multiple images taken at different focal distances into a single all-in-focus image, improving clarity and detail and thereby the accuracy and efficiency of subsequent image processing. It has important application value in machine vision, medical imaging, and remote sensing: it can effectively mitigate problems such as insufficient depth of field, image distortion, and information loss, improve image quality, and provide more accurate and reliable data for later analysis and decision-making. In recent years, multi-focus image fusion has been widely studied at home and abroad, and many fusion models have been proposed. However, existing methods still need further improvement, owing to problems such as the loss of key information when extracting features from multi-focus images, overly complicated focus-level measurement, and the scarcity of real reference images in multi-focus datasets. To address these problems, this thesis studies deep-learning-based multi-focus image fusion methods. The specific contributions are as follows:

(1) To address the loss of high-frequency texture information in multi-focus images, the model's weak ability to segment focused and defocused regions, vanishing gradients, and unsatisfactory fused-image quality, a texture-enhancement-based multi-focus image fusion method, MFIF-GAN, is proposed. First, a texture enhancement module is designed to extract deep texture features of the image, addressing the loss of high-frequency texture information. Second, a densely connected convolutional network is used to improve the generator, raising the efficiency of feature reuse and alleviating vanishing gradients. Then, a hybrid attention mechanism is used to strengthen the model's learning of focused regions and
solve the problem of the model's weak ability to segment the focused and defocused regions of the image. Finally, the generative adversarial training scheme is introduced: the ground-truth label and the fused image output by the generator are fed into the discriminator, which helps MFIF-GAN generate more realistic images by learning the differences in texture structure between the two.

(2) To address the loss of key information during feature fusion, blurred regions in the fused image, and the scarcity of real reference images in multi-focus datasets, a feature-fusion-attention-based multi-focus image fusion method, FFA-Fusion-GAN, is proposed. First, channel attention and spatial attention are combined so that key features receive greater weight during feature fusion, addressing the loss of key information. Second, a guidance module built on a guided filter generates a weight map based on the principle of repeated blurring and constrains the generator to produce an all-in-focus image, addressing the partial blurring of fused images. Then, a method for simulating multi-focus image pairs is proposed: a Gaussian blur filter is applied to an originally clear image to form a new multi-focus image pair, addressing the lack of real reference images in multi-focus datasets. Finally, following the idea of game theory, adversarial learning between the discriminator and the generator is established, and an adversarial loss function is proposed to further improve the quality of the fused image.
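The exact attention architecture is not specified above; the following is a minimal numpy sketch of how channel attention and spatial attention can be combined into a hybrid attention block over a (C, H, W) feature map. The sigmoid gating, the pooling choices, and the random weight matrix `w` are illustrative stand-ins for learned layers, not the thesis's actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    """feat: (C, H, W) feature map; w: (C, C) stand-in for a learned layer.
    Squeeze each channel to a scalar by global average pooling, then
    produce per-channel weights in (0, 1) that rescale the channels."""
    squeeze = feat.mean(axis=(1, 2))            # (C,)
    weights = sigmoid(w @ squeeze)              # (C,) in (0, 1)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """Channel-wise mean and max are combined into one spatial weight
    map in (0, 1) that rescales every pixel location."""
    avg = feat.mean(axis=0)                     # (H, W)
    mx = feat.max(axis=0)                       # (H, W)
    weights = sigmoid(avg + mx)                 # (H, W) in (0, 1)
    return feat * weights[None, :, :]

def hybrid_attention(feat, w):
    """Apply channel attention, then spatial attention, in sequence."""
    return spatial_attention(channel_attention(feat, w))
```

Because both gates lie in (0, 1), the block only rescales features; output shape matches the input, which is what allows it to be dropped into a generator without changing the surrounding layers.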
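The repeated-blurring principle used for the weight map can be illustrated as follows: blurring an image a second time changes already-defocused regions very little but changes sharp, in-focus regions a lot, so the absolute difference serves as a focus map. This sketch omits the guided-filter refinement step described above; the kernel size and sigma are arbitrary choices, not the thesis's settings.

```python
import numpy as np

def gaussian_blur(img, size=7, sigma=2.0):
    """Separable Gaussian blur of a 2-D grayscale image (reflect padding)."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    pad = size // 2
    p = np.pad(img, pad, mode="reflect")
    # convolve rows, then columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def focus_weight_map(img):
    """Re-blur the image; focused regions respond strongly to the extra
    blur, defocused regions barely change (the repeated-blurring idea).
    Returns a map normalized to [0, 1], high where the image is sharp."""
    diff = np.abs(img - gaussian_blur(img))
    return diff / (diff.max() + 1e-8)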
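The simulated multi-focus pair generation in (2) can be sketched directly: a Gaussian blur is applied to a clear image, and a binary mask selects which regions stay sharp in each of the two complementary source images, while the original clear image serves as the ground-truth reference. The kernel parameters here are illustrative, not the thesis's settings.

```python
import numpy as np

def gaussian_blur(img, size=9, sigma=3.0):
    """Separable Gaussian blur of a 2-D grayscale image (reflect padding)."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    pad = size // 2
    p = np.pad(img, pad, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def make_multifocus_pair(clear, mask):
    """clear: 2-D all-in-focus image; mask: binary map, 1 = foreground.
    Returns two complementary partially focused source images; `clear`
    itself is the ground-truth reference for training."""
    blurred = gaussian_blur(clear)
    near = np.where(mask == 1, clear, blurred)   # foreground in focus
    far = np.where(mask == 1, blurred, clear)    # background in focus
    return near, far
```

Every synthesized pair thus comes with an exact full-focus reference, which is precisely what real multi-focus datasets lack.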