Model-based stock price prediction can give investors valuable guidance for profit maximization. However, because financial data contain a high level of noise, deep neural networks trained on the raw data inevitably fail to predict stock prices accurately. To address this problem, the wavelet threshold-denoising method, which has been widely applied in signal denoising, is adopted to preprocess the training data. Experimental results show that preprocessing with the soft/hard threshold method noticeably suppresses the noise.

The main work of this research is as follows: (1) a method that uses the wavelet transform to preprocess the raw stock data is proposed, and the preprocessed data are applied to model training; (2) the original wavelet denoising method is improved, and data preprocessing with the improved method yields a clear gain in model performance; (3) a new multi-optimal-combination wavelet transform (MOCWT) method is proposed, which the experiments show has the best preprocessing effect on the raw data. In this method, a novel threshold-denoising function is presented to reduce the degree of distortion in signal reconstruction.

In the experiments, the wavelet-preprocessed data are used to train a Long Short-Term Memory (LSTM) model for stock price prediction. LSTM is a special kind of Recurrent Neural Network (RNN); due to its unique gated structure, it is well suited to processing and predicting sequences with long time intervals and delays. As a typical nonlinear model, an LSTM cell is generally regarded as a complex nonlinear unit used to construct larger and more complex deep neural networks. Neural networks are mostly trained with the backpropagation algorithm, which minimizes the network error. However, because gradients are multiplied layer by layer during backpropagation, vanishing or exploding gradients often occur when the network is relatively deep; the gate mechanism of LSTM can effectively avoid this phenomenon.

Four kinds of data are used for model training: the raw stock data, the data preprocessed by the original wavelet method, the data preprocessed by the improved wavelet method, and the data preprocessed by the newly proposed MOCWT method. The prediction results of the trained models are compared, and the prediction performance of networks with different numbers of layers is also compared. Experimental results show that the improved wavelet transform outperforms the original wavelet transform, and that, compared with both the improved and the original wavelet methods, MOCWT achieves the best prediction accuracy.
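As a concrete illustration of the baseline soft/hard threshold wavelet denoising described above (not the proposed MOCWT method), the sketch below preprocesses a one-dimensional price series with PyWavelets. The db4 wavelet, the decomposition level, and the universal threshold rule are illustrative assumptions rather than the exact settings used in the experiments.

```python
# A minimal sketch of baseline wavelet threshold denoising (soft/hard),
# assuming PyWavelets; the wavelet, level, and threshold rule are
# illustrative choices, not necessarily those used in the experiments.
import numpy as np
import pywt

def wavelet_denoise(prices, wavelet="db4", level=3, mode="soft"):
    # Decompose the series into approximation and detail coefficients.
    coeffs = pywt.wavedec(prices, wavelet, level=level)

    # Estimate the noise level from the finest detail band and apply the
    # universal (VisuShrink) threshold -- an assumed rule for this sketch.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(prices)))

    # Threshold every detail band; keep the approximation untouched.
    denoised = [coeffs[0]] + [
        pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]
    ]

    # Reconstruct and trim to the original length.
    return pywt.waverec(denoised, wavelet)[: len(prices)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.cumsum(rng.normal(size=512)) + rng.normal(scale=0.5, size=512)
    clean = wavelet_denoise(noisy, mode="soft")   # or mode="hard"
    print(noisy[:5], clean[:5])
```

Switching `mode` between `"soft"` and `"hard"` reproduces the two baseline thresholding variants compared in the study; only the thresholding function changes, while the decomposition and reconstruction steps stay the same.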
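The LSTM predictor itself can be sketched as follows. This is a minimal illustrative model in PyTorch, and the window length, hidden size, and layer count are assumptions, since the paper compares networks of several different depths.

```python
# A minimal sketch of an LSTM stock-price regressor, assuming PyTorch.
# Window length, hidden size, and layer count are illustrative only;
# the experiments compare networks with different numbers of layers.
import torch
import torch.nn as nn

class PricePredictor(nn.Module):
    def __init__(self, n_features=1, hidden_size=64, num_layers=2):
        super().__init__()
        # The gated LSTM cells mitigate the vanishing/exploding gradients
        # that plain RNNs suffer from during backpropagation through time.
        self.lstm = nn.LSTM(n_features, hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # next-step price

    def forward(self, x):                       # x: (batch, window, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])         # use the last time step

# Training-loop sketch on (denoised) sliding windows with placeholder data.
model = PricePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

windows = torch.randn(32, 30, 1)   # placeholder batch: 30-step windows
targets = torch.randn(32, 1)       # placeholder next-step prices

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(windows), targets)
    loss.backward()                # backpropagation through time
    optimizer.step()
```

In the setting described above, the placeholder `windows` would be replaced by sliding windows cut from one of the four data variants (raw, original-wavelet, improved-wavelet, or MOCWT-preprocessed), so the same model and training loop can be reused for all comparisons.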