A time series is a sequence of data points generated and recorded in temporal order. Time series arise continuously from human activities and natural processes at all times and places, which makes time series analysis an important research topic in data mining. Deep learning has become a widely used approach in time series analysis and delivers strong performance on a variety of tasks. However, because deep learning is data-driven, the widespread class imbalance in time series datasets and the scarcity of labeled data limit the performance of deep learning models. To address these issues, this dissertation adopts a self-supervised learning strategy that constructs pretext tasks for model training. Building on the proposed time series data augmentation methods, which employ a siamese encoder, this dissertation proposes a time series representation method based on a siamese feature extractor. The method enables the feature extractor to learn high-quality time series representations without label information and eases data analysis for downstream tasks.

First, motivated by the phase shifts and amplitude changes that are widespread in time series, this dissertation proposes a self-supervised learning method based on a siamese encoder. By jointly minimizing the dynamic time warping (DTW) distance between time series and the Euclidean distance between the embedding vectors produced by the siamese encoder, the method causes movements of embedding vectors in the deep Euclidean feature space to correspond to phase shifts and amplitude changes, thereby aligning the deep Euclidean feature space with the DTW geometry.

Second, this dissertation proposes two data augmentation methods that operate on time series embedding vectors. The first adds random noise to an embedding vector, which induces phase shifts and amplitude changes in the corresponding time series while avoiding corruption of the series' structural and feature information. The second is based on random linear interpolation: it first uses DTW to find the nearest-neighbor samples of the time series sample to be interpolated, and then performs random linear interpolation between the embedding vector of that sample and the embedding vectors of its nearest neighbors. This makes both the nearest-neighbor search and the linear interpolation better suited to the properties of time series data.

Finally, this dissertation uses the proposed data augmentation methods to design a self-supervised contrastive learning framework that trains the feature extractor to extract time series representations end to end. The framework first transforms a time series sample into two different augmented views through two separate augmentation operations. It then exploits the structural and feature information shared by the two views, training the feature extractor to generate high-quality representation vectors by maximizing the similarity between the representation vectors of the two augmented views.

To verify the effectiveness and soundness of the proposed methods, this dissertation conducts validation experiments on multiple datasets. The results show that both proposed augmentation methods effectively improve the size and quality of the dataset, which not only compensates for the impact of class imbalance in time series datasets on the model but also alleviates the problem of insufficient labeled data. The representation method achieves performance comparable to state-of-the-art supervised deep learning models on multiple tasks and outperforms supervised models when labeled data are insufficient. In addition, the results of the multi-domain application validation experiments show that the proposed representation method can be flexibly applied to time series data from different domains.
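The embedding-space augmentations described above can be illustrated with a minimal NumPy sketch. This is not the dissertation's implementation: the siamese encoder and decoder are omitted, the augmentation operates directly on given embedding vectors, and all function names (`dtw_distance`, `augment_with_noise`, `augment_by_interpolation`, `dtw_nearest_neighbors`) are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def augment_with_noise(z, sigma=0.1, rng=None):
    """First augmentation: perturb an embedding vector z with Gaussian noise.
    In the DTW-aligned embedding space, decoding the perturbed vector would
    yield a phase-shifted / amplitude-scaled variant of the original series
    (the decoder is not shown here)."""
    if rng is None:
        rng = np.random.default_rng()
    return z + rng.normal(0.0, sigma, size=z.shape)

def augment_by_interpolation(z, neighbor_embeddings, rng=None):
    """Second augmentation: random linear interpolation between z and the
    embedding of one of its DTW nearest neighbors."""
    if rng is None:
        rng = np.random.default_rng()
    zn = neighbor_embeddings[rng.integers(len(neighbor_embeddings))]
    lam = rng.uniform(0.0, 1.0)
    return lam * z + (1.0 - lam) * zn

def dtw_nearest_neighbors(x, candidates, k=1):
    """Select the k candidate series closest to x under DTW; their
    embeddings serve as interpolation partners."""
    dists = [dtw_distance(x, c) for c in candidates]
    return np.argsort(dists)[:k]
```

Because the nearest neighbors are chosen with DTW rather than the Euclidean distance on raw series, a sample that is merely phase-shifted still counts as a close neighbor, which is the property the interpolation-based augmentation relies on.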