Time series classification (TSC) arises widely in social life, engineering, and medicine, and has attracted considerable research attention in recent years. Traditional TSC algorithms require expert experience and manual tuning, so they are neither simple nor universal, while deep learning methods remain limited in practical applications by their training costs and hardware requirements. The echo state network (ESN) is a special form of recurrent neural network (RNN) in which the recurrent neurons are randomly connected. By exploiting the nonlinear reservoir, ESN models can capture rich dynamic features in time series and have shown superior performance on time series prediction tasks. As applications of ESN models to TSC tasks have increased in recent years, how to build efficient ESN classification models has drawn the attention of many researchers. Nevertheless, in most ESN models the input weights and reservoir weights are randomly generated, and the reservoir structure is likewise randomly initialized, with poorly understood properties. Moreover, the trainable output weights are vulnerable to outliers, so the ESN is not guaranteed to be optimal for a given task. To address these problems, the main contributions of this thesis fall into two parts: optimizing the ESN's reservoir structure (reservoir topology and reservoir size) and its weights (input and output connection weights). The main contributions are as follows:

(1) The ESN reservoir is initialized entirely at random, so its internal dynamic characteristics are difficult to interpret. To optimize the ESN structure, we propose a small-world (SW) ESN with biased dropout (BD), termed BD-SWESN. The SW algorithm exploits the properties of small-world networks to construct multiple interconnected clusters, yielding a stable reservoir, while the BD
algorithm assigns different pruning probabilities based on the reservoir output values and each neuron's contribution to overall model performance, and then prunes redundant units to obtain a more efficient and sparser ESN structure. Experimental results on synthetic datasets and 15 UCR (University of California, Riverside) open-source datasets show that SWESN outperforms the standard ESN, while BD further simplifies the reservoir structure and improves network performance.

(2) The input weights of an ESN are randomly generated during initialization, which makes optimal performance difficult to achieve. Moreover, because traditional ESN output weights are trained by least squares or ridge regression without considering the impact of errors, the output weights still leave considerable room for improvement. To address the weight optimization problem of the ESN, we propose an SWESN with a graph regularized auto-encoder (GRAE) and outlier-robust weights (ORW). First, the GRAE algorithm combines manifold regularization with an auto-encoder framework to enhance the model's ability to perceive local pattern features, generating ESN input weights that capture the input features. The ORW algorithm then constrains the noise-sensitive output weights by incorporating training errors, enhancing the robustness of the model. Experiments on a bearing fault diagnosis dataset and 15 UCR datasets demonstrate that the classification performance of GRAE-ORWSWESN is comparable to other state-of-the-art classification algorithms and deep learning baselines. Finally, algorithm analysis and visualization further illustrate the effectiveness of the approach.

(3) ESN models that optimize both structure and weights can overcome drawbacks widespread in traditional ESN models while giving the model an efficient structure. Building on the two models above, this thesis establishes a fused ESN with global optimization of weights and
reservoir (GOWR-ESN). GOWR-ESN initializes a stable and efficient reservoir structure, addressing the drawbacks of feature loss and randomness, and introduces training errors into the loss function to obtain robust output weights. Experimental results on 50 UCR datasets show that, at lower training cost, GOWR-ESN achieves performance that is statistically indistinguishable from current state-of-the-art ensemble learning models and deep learning baselines.
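To make the baseline concrete, the sketch below shows a minimal standard ESN for TSC: a randomly connected reservoir (the structure the thesis optimizes) with a ridge-regression readout (the output-weight training the thesis makes robust). All class names, hyperparameters, and the toy dataset are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

class ESN:
    """Minimal ESN classifier: random reservoir + ridge-regression readout."""

    def __init__(self, n_inputs, n_reservoir=100, spectral_radius=0.9, ridge=1e-2):
        # Input and reservoir weights are random, as in the traditional ESN.
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale to the target spectral radius to encourage the echo state property.
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.ridge = ridge

    def _features(self, series):
        # Drive the reservoir with one univariate series; use the final state as features.
        x = np.zeros(self.W.shape[0])
        for u in series:
            x = np.tanh(self.W_in @ np.atleast_1d(u) + self.W @ x)
        return x

    def fit(self, X_list, y):
        H = np.stack([self._features(s) for s in X_list])
        Y = np.eye(int(y.max()) + 1)[y]  # one-hot class targets
        # Ridge regression: W_out = (H^T H + lambda*I)^-1 H^T Y
        A = H.T @ H + self.ridge * np.eye(H.shape[1])
        self.W_out = np.linalg.solve(A, H.T @ Y)
        return self

    def predict(self, X_list):
        H = np.stack([self._features(s) for s in X_list])
        return (H @ self.W_out).argmax(axis=1)

# Toy two-class task: noisy sine waves at two frequencies.
t = np.linspace(0, 4 * np.pi, 50)
X = [np.sin(f * t) + 0.05 * rng.normal(size=t.size)
     for f in [1.0] * 20 + [2.0] * 20]
y = np.array([0] * 20 + [1] * 20)

esn = ESN(n_inputs=1).fit(X, y)
print((esn.predict(X) == y).mean())  # training accuracy on the toy task
```

Note that only `W_out` is trained; the random `W_in` and `W` and the unregularized least-squares-style readout are exactly the points the GRAE, ORW, and SW/BD components above are designed to improve.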