Anomaly detection is the technique of finding or recognizing samples that differ significantly from normal samples. In practical applications, anomalies can be defects and malfunctions in a system, anomalous events in daily life, targets of interest, and so on. In hyperspectral image based remote sensing, the fine-grained spectral features of hyperspectral images provide more detailed descriptions of different materials, facilitating the detection and classification of land covers. Anomaly detection based on hyperspectral images has critical and extensive potential applications in many fields, such as the military, agriculture, and mining. Therefore, studies that push hyperspectral anomaly detection techniques toward practical applications are a top priority. For the anomaly detection problem, this thesis starts from detection methods based on traditional machine learning strategies and proceeds to novel detectors based on deep learning. The practical problems addressed in this thesis include the insufficient detection accuracy, weak generalization ability, and uncontrollable detection results of current anomaly detection algorithms. By combining the effective ideas of traditional machine learning strategies with the strong representation and generalization ability of deep features, we aim to build deep detectors with improved detection accuracy, generalizability, and controllability. The main contents and contributions of this work are given as follows:

(1) In hyperspectral anomaly detection, prior knowledge of the background and the anomalies is missing, and modeling the background alone cannot effectively separate anomalies from it. Building upon low-rank and sparse representation, we model the background and the anomalies simultaneously. By carefully designing a background dictionary and a potential anomaly dictionary, more prior knowledge of the background and the anomalies is introduced into the low-rank and sparse representation based decomposition model, so that the background, the anomalies, and noise can be effectively separated. For dictionary construction, a dictionary-atom selection strategy based on joint sparse representation is proposed. Each background class is represented by its class-specific sub-dictionary through joint sparse representation so that all kinds of background are covered. The background dictionary consists of the atoms of each sub-dictionary that frequently join the reconstruction task, while the potential anomaly dictionary consists of the samples with the highest reconstruction errors. Extensive experiments demonstrate that detection accuracy can be effectively improved with the proposed background and potential anomaly dictionary construction strategy.
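The dictionary construction of contribution (1) can be summarized in a short sketch. The Python/numpy code below is a minimal illustration rather than the thesis implementation: it uses a crude k-means step to obtain background classes and plain least-squares coding as a stand-in for joint sparse representation, and all function names, thresholds, and sizes are assumptions chosen for brevity.

```python
import numpy as np

def build_dictionaries(X, n_classes=5, atoms_per_class=10, n_anomaly_atoms=20, seed=0):
    """Minimal sketch of the background / potential-anomaly dictionary construction.

    X: (n_pixels, n_bands) matrix of hyperspectral pixels.
    Returns D_bg (bands, K) and D_anom (bands, n_anomaly_atoms).
    """
    rng = np.random.default_rng(seed)

    # Crude k-means to obtain background classes (stand-in for the background
    # clustering assumed here).
    centers = X[rng.choice(len(X), n_classes, replace=False)]
    for _ in range(20):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == c].mean(0) if np.any(labels == c) else centers[c]
                            for c in range(n_classes)])

    bg_atoms = []
    for c in range(n_classes):
        Xc = X[labels == c]
        if len(Xc) == 0:
            continue
        # Candidate class-specific sub-dictionary: a random subset of class pixels.
        cand = Xc[rng.choice(len(Xc), min(50, len(Xc)), replace=False)].T   # (bands, m)
        # Least-squares codes as a stand-in for joint sparse coding; count how
        # often each atom carries a dominant coefficient, i.e. how often it
        # "joins the reconstruction task".
        codes = np.linalg.lstsq(cand, Xc.T, rcond=None)[0]                  # (m, n_c)
        usage = (np.abs(codes) > 0.5 * np.abs(codes).max(axis=0)).sum(axis=1)
        bg_atoms.append(cand[:, np.argsort(-usage)[:atoms_per_class]])
    D_bg = np.concatenate(bg_atoms, axis=1)        # background dictionary

    # Potential anomaly dictionary: pixels worst explained by the background dictionary.
    codes = np.linalg.lstsq(D_bg, X.T, rcond=None)[0]
    errors = np.linalg.norm(X.T - D_bg @ codes, axis=0)
    D_anom = X[np.argsort(-errors)[:n_anomaly_atoms]].T
    return D_bg, D_anom
```

The two returned dictionaries would then serve as the background and potential-anomaly bases of the low-rank and sparse representation based decomposition model described above.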
(2) When building deep detectors for hyperspectral anomaly detection, the model cannot be learned in a supervised manner because background and anomaly labels are unavailable. For unsupervised deep models, more precisely deep reconstruction models, the excessive reconstruction ability of the model means that it reconstructs not only the background but also some of the anomalies; as a result, the learned detector is not sensitive to anomalies. To solve this problem, this work proposes a deep detector based on a cluster-based memory module augmented autoencoder and optimal transport. Inspired by the background dictionary in traditional detectors, we propose a memory module that stores background prototypes. An optimal transport strategy generates pseudo labels for end-to-end deep clustering, and the obtained cluster centers act as the memory prototypes stored in the memory module (a minimal sketch of the memory read-out is given after contribution (4) below). The proposed model promotes the consistency of background features and enhances the discrimination between the background and the anomalies, so that anomalies can be detected more effectively. Extensive experiments on real hyperspectral images demonstrate that the proposed model achieves better detection performance.

(3) In deep anomaly detection algorithms, the pixel-wise deep reconstruction model often degenerates and may be unstable on complex datasets, and the detection performance largely depends on the reconstruction ability of the model: once the model cannot effectively reconstruct the current images pixel by pixel, the detector is no longer reliable. This work designs a deep detector that does not need to reconstruct samples, namely anomaly detection based on contrastive learning and a cluster-based memory module. The contrastive learning strategy effectively learns data representations without sample annotations; this self-supervised feature learning provides instance-level discriminative features for the subsequent memory prototype updates through contrastive-learning-based deep clustering. The newly designed module is further equipped with a forgetting operation that suppresses the memory prototypes' expression of anomalies. The model therefore learns more consistent features for the background, so that anomalies are detected in the feature space without pixel-wise reconstruction. The algorithm mitigates the high computational complexity, heavy burden, and instability brought by pixel-level reconstruction tasks. Comprehensive experiments on vision datasets demonstrate the effectiveness of the method.

(4) One significant defect of existing deep learning based hyperspectral anomaly detection algorithms is that the model must be retrained for each specific hyperspectral image, which is impractical because training a deep model usually requires substantial adjustments of the model's parameters and is extremely time-consuming. To solve this problem, this work proposes a unified deep detector based on relation learning and few-shot learning for anomaly detection on multiple hyperspectral images. Since the image size, image content, and imaging characteristics can vary greatly across hyperspectral images, this work directly models the relations among backgrounds and between anomalies and the background, which can be shared among different hyperspectral images. A vector of locally aggregated descriptors (VLAD) pooling strategy is proposed to map pixels from images of different sizes, or from different images, into the same space. The model is trained on a series of subtasks that mimic detecting anomalies across different hyperspectral images, so that it acquires strong generalization ability and easily extends to new hyperspectral images. A memory model built with a transformer integrates local context information for central-pixel estimation, which is used to determine whether the central pixel is an anomaly. Experiments show that, after being trained on simulated datasets, the model achieves anomaly detection on hyperspectral images with different scenes, sizes, and imaging sensors.
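The cluster-based memory module shared by contributions (2) and (3) can be pictured as a prototype read-out. The numpy sketch below shows only the addressing and read-out step under assumed details (cosine-similarity attention with a softmax over prototypes); the deep clustering, the optimal-transport pseudo-labelling, and the forgetting operation of contribution (3) are omitted, and all names are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class ClusterMemory:
    """Memory module holding background prototypes (e.g. cluster centers of
    background features).  An input feature is re-expressed as an attention-
    weighted combination of the prototypes: background features are reproduced
    well, while anomalous features, far from every prototype, are not."""

    def __init__(self, prototypes, temperature=0.1):
        self.M = np.asarray(prototypes, dtype=float)   # (n_prototypes, dim)
        self.t = temperature

    def read(self, z):
        """z: (batch, dim) features -> memory read-out of the same shape."""
        sim = z @ self.M.T / (np.linalg.norm(z, axis=1, keepdims=True)
                              * np.linalg.norm(self.M, axis=1) + 1e-12)  # cosine similarity
        w = softmax(sim / self.t, axis=1)              # addressing weights over prototypes
        return w @ self.M                              # convex combination of prototypes

    def anomaly_score(self, z):
        """Distance between a feature and its memory read-out: large for anomalies."""
        return np.linalg.norm(z - self.read(z), axis=1)
```

Because an anomalous feature lies far from every background prototype, its read-out differs strongly from the feature itself, which is what allows the read-out distance to act as a detection score in the feature space rather than through pixel-wise reconstruction.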
(5) In deep learning based hyperspectral anomaly detection, the common strategy is to learn a deep model with the help of an auxiliary deep reconstruction task. However, minimizing the reconstruction error during training is not fully consistent with the anomaly detection objective, which is to maximize the detection accuracy while minimizing the false alarm rate. To solve this problem, this work proposes a hyperspectral anomaly detection method based on the Neyman-Pearson lemma and a generative adversarial network. By building a new objective function based on the Neyman-Pearson lemma, the model can be trained to maximize the detection accuracy under a given false alarm rate; moreover, the detection performance can be controlled by changing the given false alarm rate. Optimizing this new objective function requires a large number of anomaly pixels, so an anomaly-pixel generator based on a generative adversarial network is proposed. Experiments verify the detection performance of the method and the controllability of the detection results.
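The quantity behind the Neyman-Pearson style objective of contribution (5) can be made concrete with a small sketch. The code below, with assumed names, only computes the non-differentiable evaluation counterpart: fix the threshold from a target false alarm rate measured on background scores, then report the detection rate on (generated) anomaly scores; the differentiable surrogate used for training and the GAN-based anomaly generator are not shown.

```python
import numpy as np

def np_detection_rate(background_scores, anomaly_scores, false_alarm_rate=0.01):
    """Fix the detection threshold so that at most `false_alarm_rate` of the
    background scores exceed it, then measure the detection rate on anomalies."""
    threshold = np.quantile(background_scores, 1.0 - false_alarm_rate)
    detection_rate = float(np.mean(np.asarray(anomaly_scores) > threshold))
    return threshold, detection_rate
```

Passing, for example, false_alarm_rate=0.001 instead of 0.01 tightens the threshold, which is how the detection behavior is controlled through the chosen false alarm rate.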