
Research On Multimodal Time Series Anomaly Detection

Posted on: 2022-12-06  Degree: Master  Type: Thesis
Country: China  Candidate: C Y Ding  Full Text: PDF
GTID: 2480306776992809  Subject: Insurance
Abstract/Summary:
Time series anomaly detection aims to identify anomalous patterns in time series data and has long been an important research area. As the number of modalities in a time series grows, the complexity of the data and the difficulty of anomaly detection increase accordingly. This paper proposes three time series anomaly detection frameworks for three data types of differing complexity, i.e., a single modality, two modalities, and multiple modalities (three or more), to effectively leverage the information in each type of data. For the concept drift problem in single-modality time series, this paper proposes an online Transformer model based on concept drift detection. On two-modality datasets, this paper explores anomaly detection for anomalous samples generated by multimodal adversarial attacks on speech and text. On multimodal datasets, this paper designs a multimodal spatial-temporal graph attention network, which employs a multimodal graph attention network and a temporal convolutional network to capture the spatial-temporal correlations in multimodal time series.

First, to address concept drift in single-modality time series, we propose a Transformer anomaly detection model that combines a concept drift detection module (CDAM) with online learning. The CDAM dynamically adjusts the learning rate of the model; together with online learning, it transfers knowledge from the model trained on old-concept data to the model for new-concept data via an online sparse Transformer. In addition, because self-attention in the Transformer has high time complexity, we design root-squared sparse self-attention to replace standard self-attention, which greatly reduces the computational cost.

Second, to better mine the information in two-modality data, we propose a multimodal deep fusion Transformer, termed MDFT. Specifically, audio and text features are extracted by audio and text encoders, respectively. We design a multimodal attention mechanism to capture the complementary information between the audio and text features and obtain a joint multimodal representation, which is then fed to a dense layer to produce detection results.

Finally, to explicitly capture the spatial-temporal relationships between the univariate time series of multiple modalities, we propose a multimodal spatial-temporal graph attention network (MST-GAT). MST-GAT first employs a multimodal graph attention network (M-GAT) and a temporal convolutional network to capture the spatial-temporal correlations in multimodal time series. Specifically, M-GAT uses a multi-head attention module and two relational attention modules (i.e., intra-modal attention and inter-modal attention) to explicitly model modal correlations. Furthermore, MST-GAT uses reconstruction and prediction modules to jointly optimize the model parameters.
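The abstract names a "root-squared sparse self-attention" but does not specify its construction. One plausible reading, sketched below in NumPy purely for illustration, lets each query attend only to its top ⌈√L⌉ keys by score (all names here are hypothetical, not the thesis's actual implementation), which shrinks the number of active attention weights from L² to roughly L√L:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_self_attention(Q, K, V):
    """Illustrative sparse self-attention: each of the L queries keeps only
    its top-ceil(sqrt(L)) keys by scaled dot-product score; the remaining
    scores are masked to -inf so they receive exactly zero weight."""
    L, d = Q.shape
    k = int(np.ceil(np.sqrt(L)))
    scores = Q @ K.T / np.sqrt(d)                      # (L, L) scaled scores
    top = np.argpartition(scores, -k, axis=1)[:, -k:]  # top-k key indices per query
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, top, 0.0, axis=1)          # unmask only the top-k entries
    weights = softmax(scores + mask, axis=1)           # zero outside the top-k
    return weights @ V, weights
```

With L = 16 each query attends to only ⌈√16⌉ = 4 keys, so every attention row has exactly four nonzero weights.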
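The multimodal attention mechanism in MDFT is described only at a high level. A minimal sketch, assuming bidirectional cross-attention between the encoder outputs (the function names, pooling choice, and sigmoid output head are assumptions for illustration, not the thesis's architecture): audio features attend to text features and vice versa, the two pooled results are concatenated into the joint multimodal representation, and a dense layer produces the detection score.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    """Each query row attends over the other modality's feature rows."""
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=1) @ keys_values

def mdft_fuse(audio, text, W_out, b_out):
    """Illustrative fusion: audio-to-text and text-to-audio cross-attention,
    mean-pooled, concatenated, then passed through a dense sigmoid head."""
    d = audio.shape[1]
    a2t = cross_attention(audio, text, d).mean(axis=0)  # audio enriched by text
    t2a = cross_attention(text, audio, d).mean(axis=0)  # text enriched by audio
    joint = np.concatenate([a2t, t2a])                  # joint multimodal representation
    return 1.0 / (1.0 + np.exp(-(joint @ W_out + b_out)))  # detection score in (0, 1)
```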
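The graph-attention component of M-GAT can be illustrated with a single layer over modality nodes. In this sketch (a simplified dot-product variant, not the thesis's exact formulation), each node is one univariate series, and the adjacency mask restricts attention to neighbours; intra-modal and inter-modal attention would correspond to running the same layer with two different masks, one keeping edges within a modality and one keeping edges across modalities.

```python
import numpy as np

def gat_layer(H, A, W):
    """One illustrative graph-attention layer.
    H: (N, d) node features, one node per univariate series.
    A: (N, N) adjacency mask, nonzero where an edge links two series
       (self-loops included so every node has at least one neighbour).
    W: (d, d_out) shared projection."""
    Z = H @ W
    scores = Z @ Z.T / np.sqrt(Z.shape[1])       # pairwise attention scores
    scores = np.where(A > 0, scores, -np.inf)    # attend only along edges
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)      # per-node softmax over neighbours
    return np.maximum(attn @ Z, 0.0)             # ReLU-activated aggregation
```

In MST-GAT the output of such layers would feed the temporal convolutional network and the joint reconstruction and prediction objectives; those components are omitted here.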
Keywords/Search Tags: multimodal learning, anomaly detection, time series, graph attention networks, Transformer