Hyperspectral images contain rich spatial and spectral information, which helps to improve the effectiveness of image processing tasks such as target identification and classification. However, owing to the technical limitations of existing imaging equipment, hyperspectral images with high spatial resolution are difficult to acquire directly. To promote the application of hyperspectral imagery, algorithms have emerged that fuse low-spatial-resolution hyperspectral images with high-spatial-resolution multispectral images of the same scene to obtain hyperspectral images with high spatial resolution. Traditional image fusion methods use predefined prior information to characterize the mapping from low resolution to high resolution, so their results are strongly influenced by that prior, and the accuracy of the fused image tends to be low and unstable. Deep learning-based fusion methods can instead learn the image prior directly with convolutional networks and improve the quality of the fused output; in addition, incorporating the degradation prior from high-resolution to low-resolution images into the network for residual learning can further enhance the extraction of image information and accelerate network training. For the problem of hyperspectral and multispectral image fusion, this paper proposes deep fusion methods combined with a learnable spatial-spectral degradation model. The main research contents include the following three aspects.

1. A hyperspectral and multispectral image fusion algorithm based on a spatial-spectral joint correction network is proposed. The method uses convolutional networks to construct spatial and spectral degradation models and introduces them into the whole fusion network. Instead of requiring a predefined mapping between the high-resolution hyperspectral image and the low-spatial-resolution hyperspectral or multispectral images as prior information, the algorithm learns a spatial-spectral residual map from the errors between the degradation-model outputs and the multisource observations, and uses this map to improve the accuracy of the fused hyperspectral image; the output of the model can also be fed back into the network for several iterations of correction (a minimal illustrative sketch of this correction loop is given after this abstract). Experimental results on public datasets show that the fusion results obtained by the proposed method achieve excellent performance in both visual quality and objective quality evaluation indexes.

2. An unsupervised semi-blind fusion algorithm for hyperspectral and multispectral images based on multiscale coupled residual networks is proposed. In practical application scenarios it is difficult to obtain high-spatial-resolution hyperspectral images as reference images for model training, so exploring unsupervised fusion models is of practical significance. The method models the spatial degradation of the hyperspectral image with a convolutional network and uses a known spectral response function, yielding spatial and spectral error maps. Feature information at different scales is then extracted from the spatial and spectral residuals with convolution kernels of different sizes, and the features at the same scale are coupled to obtain the spatial-spectral residual map, which is used to reconstruct the hyperspectral image (see the second sketch after this abstract). The experimental results show that, even under unsupervised training, the fusion results obtained by the proposed method still effectively preserve the spatial structure and detail information of the hyperspectral images.

3. Based on the two solutions above, we designed and developed visual processing software for hyperspectral and multispectral image fusion on the Qt application development framework. It includes file management, hyperspectral image fusion, and quality assessment modules, and its system framework and main interfaces are described in detail in the paper.
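
The degradation-model-driven residual correction at the core of the first method can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration rather than the code of the thesis: the framework (PyTorch), the names SpatialDegradation, SpectralDegradation, ResidualCorrection, and iterative_fusion, and all layer sizes are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialDegradation(nn.Module):
    # Learnable blur + downsampling: maps the estimated high-resolution HSI back to the LR-HSI domain.
    def __init__(self, bands, scale):
        super().__init__()
        self.down = nn.Conv2d(bands, bands, kernel_size=scale, stride=scale, groups=bands)
    def forward(self, hr_hsi):
        return self.down(hr_hsi)

class SpectralDegradation(nn.Module):
    # Learnable spectral response: maps hyperspectral bands to multispectral bands (1x1 convolution).
    def __init__(self, hsi_bands, msi_bands):
        super().__init__()
        self.response = nn.Conv2d(hsi_bands, msi_bands, kernel_size=1, bias=False)
    def forward(self, hr_hsi):
        return self.response(hr_hsi)

class ResidualCorrection(nn.Module):
    # Turns the spatial and spectral error maps into a correction of the current HSI estimate.
    def __init__(self, hsi_bands, msi_bands, scale):
        super().__init__()
        self.scale = scale
        self.fuse = nn.Sequential(
            nn.Conv2d(hsi_bands + msi_bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, hsi_bands, 3, padding=1))
    def forward(self, hr_hsi, err_spatial, err_spectral):
        err_spatial_up = F.interpolate(err_spatial, scale_factor=self.scale,
                                       mode='bilinear', align_corners=False)
        residual = self.fuse(torch.cat([err_spatial_up, err_spectral], dim=1))
        return hr_hsi + residual

def iterative_fusion(lr_hsi, hr_msi, spa_deg, spe_deg, correct, steps=3, scale=4):
    # Initial estimate: bilinear upsampling of the LR-HSI to the MSI resolution.
    x = F.interpolate(lr_hsi, scale_factor=scale, mode='bilinear', align_corners=False)
    for _ in range(steps):
        err_spatial = lr_hsi - spa_deg(x)    # error in the low-resolution hyperspectral domain
        err_spectral = hr_msi - spe_deg(x)   # error in the high-resolution multispectral domain
        x = correct(x, err_spatial, err_spectral)
    return x

In this sketch the spatial degradation is a per-band strided convolution and the spectral degradation a 1x1 convolution; in the semi-blind method of the second contribution, the spectral response function is instead taken as known and fixed.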
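
The multiscale coupling in the second method can be sketched in the same hypothetical style: convolution kernels of several sizes extract features from the spatial and spectral error maps, same-scale features from the two branches are coupled, and the coupled features are fused into the spatial-spectral residual map. The kernel sizes, channel widths, and the class name MultiscaleCoupledResidual are illustrative assumptions, not the thesis implementation.

import torch
import torch.nn as nn

class MultiscaleCoupledResidual(nn.Module):
    # Multiscale feature extraction from the two error maps, coupling at each scale,
    # and fusion into a single spatial-spectral residual map.
    def __init__(self, spa_bands, spe_bands, out_bands, kernel_sizes=(3, 5, 7), width=32):
        super().__init__()
        self.spatial_branch = nn.ModuleList(
            nn.Conv2d(spa_bands, width, k, padding=k // 2) for k in kernel_sizes)
        self.spectral_branch = nn.ModuleList(
            nn.Conv2d(spe_bands, width, k, padding=k // 2) for k in kernel_sizes)
        self.couple = nn.ModuleList(
            nn.Conv2d(2 * width, width, kernel_size=1) for _ in kernel_sizes)
        self.fuse = nn.Conv2d(width * len(kernel_sizes), out_bands, 3, padding=1)

    def forward(self, err_spatial, err_spectral):
        # Both error maps are assumed to have been brought to the same spatial resolution beforehand.
        coupled = []
        for spa, spe, mix in zip(self.spatial_branch, self.spectral_branch, self.couple):
            same_scale = torch.cat([spa(err_spatial), spe(err_spectral)], dim=1)
            coupled.append(torch.relu(mix(same_scale)))
        return self.fuse(torch.cat(coupled, dim=1))  # spatial-spectral residual map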