
Long Time Series Power Load Forecasting Based On Sparse Self-Attention Mechanism

Posted on: 2023-07-26
Degree: Master
Type: Thesis
Country: China
Candidate: C W Zhan
Full Text: PDF
GTID: 2568306800952009
Subject: Electrical engineering

Abstract/Summary:
Accurate and scientific power load forecasting is the prerequisite for formulating dispatching schemes and power generation plans. Long time series power load forecasting is a new demand arising from the increasingly complex conditions of modern power systems. Thanks to their strong nonlinear fitting ability, neural networks have achieved excellent results in power load forecasting and have gradually become a mainstream research direction in recent years. However, most neural-network-based forecasting models are prone to error accumulation and information loss on long time series, which makes it difficult for them to capture long-range patterns in load data. Compared with common short-term and ultra-short-term forecasting, long time series forecasting also suffers from high computational cost and low training efficiency. This thesis studies models based on the self-attention mechanism for the task of long time series power load forecasting. The main work is as follows:

1. Summarizes common power load forecasting models and analyzes their limitations on long time series; introduces the attention mechanism and verifies its effectiveness in load forecasting on a data set; and then turns to the Transformer model, which relies entirely on self-attention. Because the path length between any two positions in self-attention is 1, the Transformer can effectively reduce long-distance information loss and error accumulation, which provides theoretical support for applying an optimized Transformer to long time series power load forecasting.

2. Adapts the input of the Transformer model to the power load forecasting task. To address the quadratic growth of memory overhead when computing self-attention weights, the thesis reduces the complexity of each layer
from O(L²) to O(L ln L), which optimizes the memory overhead of the self-attention weight matrix. To address the fact that the Transformer encoder and decoder are not well suited to long time series load forecasting, the thesis optimizes the feature-map generation of the encoder and the output structure of the decoder, reducing the encoder's memory overhead and increasing the decoder's output speed.

3. Explores the optimal values of the main parameters of the sparse self-attention model through experiments, and compares the optimized model with several other common load forecasting models on the data set of the Nanchang Taoyuan substation. The measured results show that, compared with recurrent neural network models, the model based on the sparse self-attention mechanism has clear advantages in prediction accuracy and training efficiency, providing a real-world case reference for a new solution to long time series power load forecasting.
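The sparse self-attention idea summarized above can be sketched as follows. This is a minimal illustrative sketch, not the thesis's exact model: the max-minus-mean query-activity heuristic, the choice of u active queries, and all variable names are assumptions made for demonstration. Dense attention costs O(L²) per layer; letting only the u dominant queries attend in full (with the remaining queries falling back to the mean of the values) brings the attended cost to O(uL), in the spirit of sparse self-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dense_attention(Q, K, V):
    """Standard scaled dot-product self-attention: O(L^2) weight matrix."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # (L, L) attention weight matrix
    return softmax(scores) @ V

def sparse_attention(Q, K, V, u):
    """Sparse variant: only the u 'most active' queries attend in full.

    Query activity is scored by a simple max-minus-mean heuristic
    (an illustrative stand-in for a sparsity measurement); lazy
    queries fall back to the mean of V, so only u rows of the
    weight matrix are materialized.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # full scores kept here for clarity;
                                           # a real implementation samples keys
    activity = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(activity)[-u:]        # indices of the u dominant queries
    out = np.tile(V.mean(axis=0), (Q.shape[0], 1))  # lazy queries -> mean(V)
    out[top] = softmax(scores[top]) @ V    # active queries -> full attention
    return out

# Example: a 96-step load sequence embedded in 16 dims, 8 active queries.
rng = np.random.default_rng(0)
L, d, u = 96, 16, 8
X = rng.normal(size=(L, d))
dense_out = dense_attention(X, X, X)
sparse_out = sparse_attention(X, X, X, u)
print(dense_out.shape, sparse_out.shape)   # (96, 16) (96, 16)
```

In this sketch the active queries produce exactly the same rows as dense attention, so the approximation error is confined to the lazy queries; the trade-off between u and accuracy mirrors the parameter exploration described in work item 3.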
Keywords/Search Tags:long time series power load forecasting, self-attention mechanism, transformer, sparsity