
Multi-focus Image Fusion Based On Orthogonal Polynomial Transform

Posted on: 2020-10-23    Degree: Master    Type: Thesis
Country: China    Candidate: L Zhou    Full Text: PDF
GTID: 2428330590971776    Subject: Computer technology
Abstract/Summary:
With the continuous progress of modern society, people's requirements for images keep rising, and high-resolution images have become a common pursuit. Images taken by ordinary cameras can no longer meet everyday needs, and even high-power optical lenses suffer from a limited depth of field: the larger a lens's focal length and magnification, the smaller its depth of field, and objects outside the depth of field appear blurred. Image fusion is an effective solution to this problem, and multi-focus image fusion is one of its important research areas: it merges multiple images of the same scene, taken with different focus settings, into a single fully focused image. This thesis exploits the advantages of orthogonal polynomial transforms and combines them with multi-focus image fusion. The main research contributions are as follows:

Firstly, a multi-focus image fusion method based on the Discrete Tchebichef Transform (DTT) and a focus measure is proposed. The properties of the Discrete Tchebichef Transform are first analysed, and the relationship between its coefficients and image focus is established through correlation analysis. The low-order DTT coefficients of each image block are then computed to obtain the block's focus measure value, the focus measure values of corresponding blocks from the source images are compared, and the block with the larger value is selected for the fused image. Comparison with several classical multi-focus image fusion methods shows that the fused images produced by the proposed method are subjectively clearer, while the method has low time complexity and is robust to noise.

Secondly, this thesis proposes a Discrete Tchebichef Transform-based Neural Network (DTTNet) for multi-focus image fusion, which learns to classify the pixels of the source images as focused, defocused, or uncertain. DTTNet is an end-to-end deep neural network consisting of only one convolutional layer and three fully connected layers. The filter bank of the convolutional layer is fixed by the DTT kernel functions, while the weights of the three fully connected layers are learned from training data. In contrast to traditional handcrafted focus measures, the focus measure of the proposed method is thus obtained by learning. Experimental results demonstrate that the proposed method is competitive with, or even outperforms, state-of-the-art multi-focus image fusion methods in terms of both subjective visual perception and objective evaluation metrics.
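As a rough illustration of the first method, the following Python sketch builds an orthonormal discrete Tchebichef basis (by QR-orthonormalising the monomial basis, which matches the DTT up to sign), uses the energy of each block's low-order non-DC 2-D DTT coefficients as a focus measure, and fuses two source images block by block. The block size, the number of coefficients used, and the exclusion of the DC term are illustrative assumptions, not the exact choices made in the thesis.

```python
import numpy as np

def tchebichef_basis(n):
    """Orthonormal discrete Tchebichef basis of size n x n.
    Built by QR-orthonormalising the monomial (Vandermonde) basis on the
    points 0..n-1, which equals the discrete orthonormal Tchebichef
    polynomials up to sign; the sign does not affect energy-based measures."""
    x = np.arange(n, dtype=float)
    Q, _ = np.linalg.qr(np.vander(x, n, increasing=True))
    return Q.T  # row k = polynomial of order k evaluated on 0..n-1

def dtt_focus_measure(block, order=3):
    """Focus measure of a square block: energy of its low-order, non-DC
    2-D DTT coefficients (the exact coefficient selection is an assumption)."""
    T = tchebichef_basis(block.shape[0])
    coeffs = T @ block @ T.T            # 2-D DTT of the block
    low = coeffs[:order, :order].copy()
    low[0, 0] = 0.0                     # drop the DC term
    return float(np.sum(low ** 2))

def fuse_blockwise(img_a, img_b, block=8, order=3):
    """Block-wise fusion: keep, at every block position, the source block
    with the larger DTT focus measure."""
    img_a = img_a.astype(float)
    img_b = img_b.astype(float)
    fused = img_a.copy()                # border pixels not covered by full blocks default to img_a
    h, w = img_a.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = img_a[i:i+block, j:j+block]
            b = img_b[i:i+block, j:j+block]
            if dtt_focus_measure(b, order) > dtt_focus_measure(a, order):
                fused[i:i+block, j:j+block] = b
    return fused
```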
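The second sketch, in PyTorch, captures the stated DTTNet structure: a single convolutional layer whose filters are fixed to 2-D DTT basis functions, followed by three fully connected layers that output three classes (focused, defocused, uncertain) for an input patch. The patch size, kernel size, hidden widths, and activations are assumptions for illustration, not the thesis's actual hyperparameters.

```python
import numpy as np
import torch
import torch.nn as nn

def tchebichef_kernels(k):
    """All k*k separable 2-D DTT basis functions as fixed conv filters.
    The 1-D basis comes from QR-orthonormalising the monomial basis on
    the points 0..k-1 (orthonormal discrete Tchebichef polynomials up to sign)."""
    x = np.arange(k, dtype=float)
    Q, _ = np.linalg.qr(np.vander(x, k, increasing=True))
    T = Q.T                                          # row i = i-th order polynomial
    filters = np.stack([np.outer(T[i], T[j]) for i in range(k) for j in range(k)])
    return torch.tensor(filters, dtype=torch.float32).unsqueeze(1)  # (k*k, 1, k, k)

class DTTNet(nn.Module):
    """Sketch of the DTTNet idea: one convolution layer with a filter bank
    fixed to the 2-D DTT basis, then three fully connected layers that
    classify a patch as focused / defocused / uncertain.
    Patch size, hidden widths and activations are assumed, not thesis values."""
    def __init__(self, patch=16, kernel=7, hidden=256):
        super().__init__()
        self.conv = nn.Conv2d(1, kernel * kernel, kernel_size=kernel, bias=False)
        self.conv.weight.data = tchebichef_kernels(kernel)
        self.conv.weight.requires_grad = False       # DTT filters stay fixed
        feat = kernel * kernel * (patch - kernel + 1) ** 2
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                    # focused / defocused / uncertain
        )

    def forward(self, x):                            # x: (B, 1, patch, patch)
        return self.fc(self.conv(x))

# Example: classify a batch of 16x16 grayscale patches into the three classes.
net = DTTNet()
logits = net(torch.randn(8, 1, 16, 16))              # -> shape (8, 3)
```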
Keywords/Search Tags: Discrete Tchebichef Transform, focus measure, multi-focus image fusion, convolutional neural network, deep learning