Hyperspectral images comprise dozens or even hundreds of spectral bands. Compared with RGB images, which have only three, they contain far richer spectral information that reveals characteristics of objects hidden in the spectral domain. Hyperspectral imagery is used in a wide range of applications, including mineral identification, agricultural monitoring, environmental monitoring, and military applications, as well as computer vision tasks such as image inpainting, object tracking, image super-resolution, and face recognition. However, image acquisition inevitably introduces various interferences, such as noise, while these applications demand high image quality. As an important image preprocessing technique, hyperspectral image denoising is therefore widely used in industrial and agricultural fields and in high-level computer vision tasks.

Sparse representation over an overcomplete dictionary is a research hotspot in computer vision, but sparse dictionary learning algorithms designed for traditional 2D images perform poorly on hyperspectral image denoising. As data volumes grow, tensor representation and adaptive learning have developed rapidly. Tensor decomposition is an efficient way to represent high-dimensional data with low rank while preserving the geometric structure of tensors. Drawing on existing denoising methods and the properties of hyperspectral images, this paper combines tensor decomposition with dictionary learning theory and solves the model parameters by adaptive inference. Tensor decomposition mainly takes two forms: Tucker decomposition and CP decomposition. This paper proposes a nonparametric Bayesian tensor dictionary learning denoising algorithm based on Tucker decomposition and a nonparametric Bayesian tensor decomposition denoising algorithm based on CP decomposition, and carries out denoising simulation experiments by superimposing noise on real hyperspectral images.

The experimental results show that both proposed algorithms improve denoising performance to different degrees compared with existing algorithms. Compared with the Tucker-based algorithm, the CP-based algorithm exploits the correlation between dictionary factors and further improves the accuracy of noise inference and the quality of the reconstructed image. Its reconstruction results surpass those of the other algorithms under various noise conditions, which demonstrates the effectiveness of the algorithm.
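To make the two decompositions named above concrete, the following is a minimal NumPy sketch of each: a truncated Tucker decomposition via the higher-order SVD (HOSVD), and a CP decomposition via alternating least squares (ALS). This is an illustrative sketch only, not the paper's nonparametric Bayesian algorithms; all function names, rank choices, and iteration settings are assumptions introduced here.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    """Truncated Tucker decomposition (core tensor + factor matrices) via HOSVD."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)  # project onto each factor subspace
    return core, factors

def tucker_to_tensor(core, factors):
    """Reconstruct a tensor from its Tucker core and factor matrices."""
    T = core
    for m, U in enumerate(factors):
        T = mode_dot(T, U, m)
    return T

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(T, rank, n_iter=300, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        # each update solves a linear least-squares problem for one factor
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def cp_to_tensor(A, B, C):
    """Reconstruct a 3-way tensor from CP factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

The difference between the two forms is visible in the code: Tucker keeps a dense core tensor that mixes all factor columns, whereas CP constrains the core to a diagonal, coupling one column from each factor per rank-1 component; this tighter coupling is what the abstract's CP-based algorithm exploits when modeling correlation between dictionary factors. Truncating the ranks below the tensor's dimensions is the step that discards noise while retaining the low-rank signal.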