
Deep Fusion Network Learning For Multi-Source Remote Sensing Image Classification

Posted on: 2020-03-05  Degree: Doctor  Type: Dissertation
Country: China  Candidate: X Liu  Full Text: PDF
GTID: 1362330602450297  Subject: Circuits and Systems
Abstract/Summary:
Remote sensing image classification has long been a research hotspot in the field of remote sensing and plays an important role in the development of the national economy. Remote sensing images exhibit a wide variety of features, summarized overall as the "three multis" and "four highs": "three multis" refers to multi-sensor, multi-platform, and multi-angle; "four highs" refers to high spatial resolution, high spectral resolution, high temporal resolution, and high radiometric resolution. These data characteristics pose new challenges that call for new algorithms. Different complex data sources have different advantages, and how to exploit those advantages fully, namely multi-source data fusion interpretation, is a promising subject for remote sensing image interpretation.

According to the imaging mode, remote sensing data can be divided into active imaging data and passive imaging data. Because data acquired under the same imaging mode share relatively consistent characteristics, they can be grouped accordingly. Active imaging data include Synthetic Aperture Radar (SAR) images, polarimetric SAR (PolSAR) images, and LiDAR data. Passive imaging data include panchromatic (PAN) images, RGB images, multispectral (MS) images, hyperspectral (HSI) images, and the like. Longitudinal analysis of the data characteristics and interpretation advantages within one imaging mode, combined with horizontal integration of the advantages across imaging modes, enhances the ability to interpret remote sensing images. In this thesis, the relationships between images are fully considered, and reasonable and efficient models are constructed and applied to the classification of multi-source remote sensing images, achieving good classification performance. These research results have also been affirmed by domestic and foreign peer experts. The specific contributions are as follows:

1. Aiming at the problem that image structure information is destroyed in the traditional classification framework, a PolSAR image classification method based on a deep convolutional neural network and a matrix classifier is designed. It overcomes the drawback of earlier algorithms that destroy the two-dimensional structure of the feature space, preserving the spatial characteristics of the target and improving the classification accuracy of PolSAR images. The method creatively introduces a support matrix machine into the deep convolutional neural network and combines the two organically, establishing a new classification framework for the PolSAR classification problem. To verify the performance and robustness of the algorithm, two commonly used PolSAR data sets were selected for experiments. The experimental results are better than those of the comparison algorithms, confirming that capturing the spatial structure information of the image helps distinguish stubborn data points, thereby improving the discriminative ability of the model and the classification accuracy.

2. To extract and retain the most original polarimetric SAR data information, a classification method based on polarization scattering coding and a fully convolutional network is designed, also called the polarization convolution network. Polarization scattering coding preserves the structural information of the scattering matrix and avoids flattening the matrix into a one-dimensional vector; conveniently, convolutional networks require two-dimensional input, which the polarization scattering coding matrix satisfies. An improved fully convolutional network was designed to classify the encoded data. To make the experiments more comprehensive, the experimental data consist of four data sets from two satellites, and the comparison algorithms include both traditional and recent methods. The results show that the proposed algorithm is robust and effective: the classification map is very close to the ground-truth map, and the classification accuracy is higher than that of the comparison algorithms, mainly because the proposed algorithm preserves the structural information of the original data. Comparative experiments confirm that polarization scattering coding is indeed effective, and that the designed classification network performs better with this type of coding.

3. Aiming at the cumbersome process of PAN and MS image classification, a deep multi-instance learning model based on spatial-spectral information fusion is designed to classify multispectral and panchromatic images. First, spectral features are extracted from MS images using a stacked autoencoder (SAE), and spatial features are extracted from PAN images using a deep convolutional neural network. The two feature sets are then cascaded and fed into a fusion network with three fully connected layers to learn high-level fused features. Finally, the features are classified with a softmax classifier. Visual analysis shows that the classification results are very close to the ground-truth map, and the method achieves satisfactory results. Verification on four data sets demonstrates its strong robustness, indicating that the proposed deep multi-instance learning framework can solve the fusion classification task for PAN and MS images.

4. Considering the multi-source image classification problem from a multi-view perspective, a deep multi-view joint learning network is proposed for multi-source remote sensing data classification, covering MS images, HSI images, and LiDAR data. HSI images have rich spectral properties; LiDAR data provide height and intensity information; MS images combine rich spectral properties with high spatial resolution. In the proposed method, canonical correlation analysis is used to obtain correlated features, a deep learning architecture processes those features, and a view fusion pooling layer fuses the multiple view features. Experiments on multi-source data, in which both spatial and spectral information are used for classification, show that the method performs better than traditional methods. In addition, the fusion classification of multi-source remote sensing data provides more reliable and applicable results.

5. Image fusion classification usually involves three levels of abstraction: pixels, features, and decisions. Focusing on feature-level and decision-level fusion strategies, a new classification framework is proposed for multi-source remote sensing data, namely HSI, LiDAR, and very-high-resolution (VHR) RGB data. RGB images have extremely high spatial resolution; HSI images have rich spectral properties; LiDAR provides height and intensity information. How to use these data jointly to improve image interpretation is a topic worthy of further study. The method is based on deep multi-level fusion and can take multiple fusion levels into consideration. Experiments with fusion at different levels show that homogeneous and heterogeneous information are used simultaneously to classify the image. The proposed method performs better than traditional single-level fusion methods in extracting high-quality semantics from multi-source images, and the fusion classification of multi-source remote sensing data again provides more reliable and applicable results.

In summary, this thesis systematically studies the classification of remote sensing images, including the classification of polarimetric SAR images and LiDAR data in the active imaging mode, as well as the classification of panchromatic, MS, and HSI images in the passive imaging mode. Finally, images from the two imaging modes are fused and classified, and good classification performance is achieved.
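To make the fusion strategy of contribution 3 concrete, the following is a minimal NumPy sketch of the feature-level fusion head only: two feature matrices (standing in for SAE spectral features from MS and CNN spatial features from PAN) are cascaded, passed through three fully connected layers, and classified with softmax. All names, dimensions, and the random untrained weights are illustrative assumptions, not details from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fusion_head(spectral_feat, spatial_feat, weights):
    """Cascade (concatenate) the two feature views, then apply three
    fully connected layers; the last layer feeds a softmax classifier."""
    h = np.concatenate([spectral_feat, spatial_feat], axis=1)
    w1, b1, w2, b2, w3, b3 = weights
    h = relu(h @ w1 + b1)
    h = relu(h @ w2 + b2)
    return softmax(h @ w3 + b3)

# Toy setup (hypothetical): 8 samples, 32-dim spectral features,
# 64-dim spatial features, 7 land-cover classes, random weights
# standing in for trained parameters.
n, d_spec, d_spat, n_cls = 8, 32, 64, 7
spec = rng.standard_normal((n, d_spec))
spat = rng.standard_normal((n, d_spat))
d_in = d_spec + d_spat
weights = (
    rng.standard_normal((d_in, 128)) * 0.1, np.zeros(128),
    rng.standard_normal((128, 64)) * 0.1, np.zeros(64),
    rng.standard_normal((64, n_cls)) * 0.1, np.zeros(n_cls),
)
probs = fusion_head(spec, spat, weights)  # (8, 7) class probabilities
pred = probs.argmax(axis=1)               # per-sample class labels
```

In a real pipeline, the two extractors and the fusion head would be trained jointly end to end; the sketch only shows why the cascaded input dimension is the sum of the two view dimensions.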
Keywords/Search Tags: Remote sensing image interpretation, PolSAR image, fusion classification, deep learning, convolutional neural network