Abstract

The rapid development of edge computing is driving the deployment of stock market prediction services on terminal equipment. However, traditional prediction algorithms fall short of the stability and efficiency such services require. To address this challenge, an improved Elman neural network is proposed in this paper. The Elman neural network is a typical dynamic recurrent neural network that can be used to provide a stock price prediction service. First, the prediction model parameters and the build process are analysed in detail. Then, the historical closing prices of the Shanghai composite index and the historical opening prices of the Shenzhen composite index are collected for training and testing, so as to predict the prices of the next trading day. Finally, the experimental results validate that the improved Elman neural network model is effective for predicting the short-term future stock price.

1. Introduction

The stock market can be regarded as a complex nonlinear system, and many factors affect the stock price; in particular, the recent historical price has a great influence on the short-term future price. It is therefore difficult, but valuable, to provide a stock price prediction service. Fortunately, with the development of edge computing and neural network technologies, commercial service providers can exploit low-latency edge resources and the nonlinear expression ability of neural networks to offer their users more efficient stock price prediction services. Based on this observation, we can design a neural network that predicts the stock price of the next period from the historical stock price [14]. In this paper, we use the historical closing prices of the Shanghai composite index to predict its closing price on the next trading day, and the historical opening prices of the Shenzhen composite index to predict its opening price on the next trading day. In addition, our research could make stock prediction algorithms deployed on edge terminals more efficient.

Over the years, many scholars have established a large number of mathematical models to predict the stock price, but they have not achieved good results or had much impact. However, the rise of big data and artificial intelligence technologies provides another effective solution for stock price prediction. This is the motivation of our research. Specifically, we hope to establish a reasonable artificial intelligence model that makes a more accurate prediction of the short-term future stock price from the latest price history. We expect this model to offer a reference for stock investors.

In this paper, we propose an improved Elman neural network model to predict stock price, and our main contributions include the following:
(i) In order to apply traditional stock prediction algorithms to terminal devices such as edge nodes and mobile phones, we build a stock price prediction model based on an improved Elman network, with the aim of making the prediction simpler and more stable. We give the specific model parameters and build process.
(ii) In order to reflect the latest stock market situation, we trained and tested the proposed model on recent datasets, namely, the Shanghai composite index and the Shenzhen composite index in 2018, 2019, and 2020.
(iii) To analyse the new model more clearly, we quantitatively evaluated its performance with a variety of mathematical tools and error analysis methods. In addition, a large number of diagrams and tables are provided to further clarify the model.

The rest of this paper is organized as follows. Section 2 presents the preliminaries, in which the principle of the Elman neural network is clarified. Section 3 reviews and summarizes the related work and, on this basis, clarifies the significance of this study. In Section 4, we propose our model and introduce the specific construction procedure in detail. Section 5 presents the experiments, in which the model is built, trained, and tested; a great deal of space is also devoted to the analysis of the results. Finally, Section 6 concludes this paper.

2. Preliminaries

The Elman neural network is a widely used feedback neural network model, which is generally divided into four layers: input layer, hidden layer, bearing layer, and output layer [5, 6].

Figure 1 shows the basic structure of an Elman neural network; the connections among the input layer, hidden layer, and output layer are similar to those of a feedforward network. The input-layer units only transmit the signal, while the output-layer units perform a weighted combination. The hidden-layer units have linear or nonlinear excitation functions, and the excitation function is usually the nonlinear sigmoid function. The bearing layer is used to memorize the output of the hidden-layer units at the previous time step and can be regarded as a one-step delay operator. Through the delay and storage of the bearing layer, the output of the hidden layer is fed back to the input of the hidden layer, which makes the network sensitive to historical data. In other words, the Elman neural network adds a bearing layer to the hidden layer as a one-step delay operator to achieve memory, so that the system can adapt to time-varying characteristics and the global stability of the network is enhanced [7–9]. The mathematical expression of the network is

\[
\begin{aligned}
y(k) &= g\big(w^{3}x(k)\big),\\
x(k) &= f\big(w^{1}x_{c}(k) + w^{2}u(k-1)\big),\\
x_{c}(k) &= x(k-1),
\end{aligned}
\]

where y(k) is the output node vector, x(k) is the nodal element vector of the hidden layer, u(k-1) is the input vector, x_c(k) is the feedback state vector, w^3 is the connection weight from the hidden layer to the output layer, w^2 is the connection weight from the input layer to the hidden layer, w^1 is the connection weight from the bearing layer to the hidden layer, g(·) is the transfer function of the output neurons, taken as a linear combination of the outputs of the hidden layer, and f(·) is the transfer function of the hidden-layer neurons, usually the sigmoid function.
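The following is a minimal MATLAB sketch of a single forward step corresponding to these equations; the weight matrices w1, w2, and w3, the previous input u_prev, and the previous hidden output x_prev are assumed to be given, and logsig and purelin are used as the hidden-layer and output-layer transfer functions:

xc = x_prev;                      % bearing layer: one-step delayed copy of the hidden output
x  = logsig(w1*xc + w2*u_prev);   % hidden layer with the sigmoid transfer function
y  = purelin(w3*x);               % output layer: linear combination of the hidden outputs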

This research uses MATLAB as the experimental platform. The two datasets used in this study are the closing prices of the Shanghai composite index over 490 trading days, from September 26, 2017, to September 30, 2019, and the opening prices of the Shenzhen composite index over 420 trading days, from August 15, 2018, to May 12, 2020. The same model is trained and tested on each of these two datasets.

3. Related Work

In fact, many researchers have studied stock price forecasting for years; some of these studies have improved existing models and some have further processed the data. However, these studies are not perfect: some of the models are too complex, and some of the processing procedures are tedious. These shortcomings increase the instability of the models and limit the application and extension of the research results.

Shi et al. considered that traditional stock forecasting methods cannot fit and analyse the highly nonlinear, multifactor stock market well, leading to problems such as low prediction accuracy and slow training speed. Therefore, they proposed a prediction method based on an Elman neural network combined with principal component analysis. To compare the results better, BP and Elman networks with the same structure were established to predict the stock data [10]. Yu et al. used an improved Elman neural network as the forecasting model to forecast the market price of Zhongji company (No. 000039) on the Shenzhen stock market; their experimental results show higher precision, a steadier forecasting effect, and faster convergence [11]. Zheng et al. studied the forecasting of opening stock prices based on the Elman neural network in 2015; they selected the opening prices of the Shanghai stock index over 337 trading days from December 2012 to April 2014 as the raw data for simulated forecasting, and the results prove the validity of their forecast model [12]. Zhang et al. successfully applied the Elman recurrent neural network to the prediction of stock opening prices. Specifically, the authors described the Particle Swarm Optimization (PSO) algorithm for learning optimization of the Elman recurrent neural network, and the results showed that the LSTM-based model was more accurate than other machine learning models [13]. Jun used the Adaptive Whale Optimization Algorithm together with an Elman neural network to predict the stock price and achieved better results in their experiments [14]. Javad Zahedi et al. used an artificial neural network model and principal component analysis to evaluate the predictability of stock prices on the Teheran stock exchange with 20 accounting variables. Finally, the goodness of fit of the principal component analysis was checked against actual values, and the effective factors of Teheran stock exchange prices were accurately predicted and modelled by a new model composed of all variables [15]. Han et al. designed a three-layer BP network and the corresponding mathematical model. Using the actual prices of stock 600688 over 140 days as samples, the network was trained in MATLAB, and a 10-day prediction of the stock price was made with a dispersion of Q = 0.0146 relative to the actual data [16].

Although scholars have made outstanding contributions in using artificial intelligence to predict stock prices, neither the stability of the models nor the accuracy of the predictions is fully satisfactory. Based on this fact, this study develops an Elman-network-based model for stock price prediction that balances simplicity, stability, and accuracy.

4. Proposed Model

The general steps to build the proposed model include data collection, data loading, sample set construction, division into training and test sets, construction of the Elman neural network, and training of the network model. The specific flow chart is shown in Figure 2.

4.1. Construction of Sample Set

The stock price prediction problem in this study is actually a time series problem, which can be expressed by the following formula:

\[
x(t) = f\big(x(t-1), x(t-2), \ldots, x(t-N+1)\big).
\]

This formula means that the closing prices of the previous N − 1 trading days can be used to predict the closing price of the next trading day. The 490 closing prices were divided into training samples and test samples. For the training samples, x_1, x_2, ..., x_N are selected to form the first sample, where x_1, ..., x_{N−1} are the independent variables and x_N is the dependent variable; x_2, x_3, ..., x_{N+1} are selected to form the second sample, where x_2, ..., x_N are the independent variables and x_{N+1} is the dependent variable. Finally, a training matrix is formed as follows:

\[
\begin{pmatrix}
x_1 & x_2 & \cdots & x_{M-N+1}\\
x_2 & x_3 & \cdots & x_{M-N+2}\\
\vdots & \vdots & & \vdots\\
x_N & x_{N+1} & \cdots & x_M
\end{pmatrix},
\]

where M denotes the total number of data points.

In this matrix, each column is a sample, and the last row is the expected output. These samples are fed into the Elman neural network for training, and then the network model can be obtained [17–19].

In this study, x_1, x_2, ..., x_8 are selected to form the first sample, and x_2, x_3, ..., x_9 are selected to form the second sample; the rest are constructed in the same manner. Here, N is set to 8, which means that the closing price of a trading day is determined by the closing prices of the previous seven trading days.

Take the Shanghai composite index dataset as an example: the closing prices of the first eight trading days are 3343.58, 3345.27, 3339.64, 3348.94, 3374.38, 3382.99, 3388.28, and 3386.10, which means that 3343.58, 3345.27, 3339.64, 3348.94, 3374.38, 3382.99, and 3388.28 are used to forecast the eighth value, 3386.10, which we have already obtained. The closing prices of the last eight trading days are 2999.28, 3006.45, 2977.08, 2985.34, 2955.43, 2929.09, 2932.17, and 2905.19; on the same principle, 2999.28, 3006.45, 2977.08, 2985.34, 2955.43, 2929.09, and 2932.17 are used to forecast the eighth value, 2905.19, which we have already obtained. Therefore, the 490 data points are converted into a matrix with 483 columns, that is, 483 samples, in which the first 7 values in each column are the independent variables and the eighth is the value to be predicted. The matrix is shown as follows:

\[
\begin{pmatrix}
x_1 & x_2 & \cdots & x_{483}\\
x_2 & x_3 & \cdots & x_{484}\\
\vdots & \vdots & & \vdots\\
x_8 & x_9 & \cdots & x_{490}
\end{pmatrix},
\]

where x_i denotes the closing price of the i-th trading day.

The Shenzhen composite index dataset is 8786.3497, 8470.9094, 8573.5693, 8355.0002, 8419.7868, 8533.4289, 8446.9836, 8480.2244, 8511.3743, 8731.6394, 8716.8172, 8666.9025, 8509.2723, 8440.9528, 8454.1357, 8519.5698, …, 10477.7614, 10460.9947, 10575.5242, 10618.1651, 10899.9169, 10923.6123, 11053.8157, 10972.0503. In the same way, the Shenzhen composite index dataset is formed as a matrix, which is as follows:

\[
\begin{pmatrix}
x_1 & x_2 & \cdots & x_{413}\\
x_2 & x_3 & \cdots & x_{414}\\
\vdots & \vdots & & \vdots\\
x_8 & x_9 & \cdots & x_{420}
\end{pmatrix},
\]

where x_i here denotes the opening price of the i-th trading day.

These 413 columns mean 413 samples, in which the first 7 values in each column are the independent variables and the eighth is the value to be predicted.
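The following is a minimal MATLAB sketch of the sample-matrix construction described above; price is an illustrative row vector holding the 490 closing prices (or the 420 opening prices), and the resulting inputs and targets would then be split into training and test portions as described in the text:

N = 8;                                 % window length: 7 inputs plus 1 target
M = numel(price);                      % 490 for Shanghai, 420 for Shenzhen
samples = zeros(N, M - N + 1);         % one sample per column
for k = 1:(M - N + 1)
    samples(:, k) = price(k:k+N-1)';   % consecutive windows of N prices
end
inputs  = samples(1:N-1, :);           % first 7 rows: independent variables
targets = samples(N, :);               % last row: value to be predicted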

4.2. Construction of Elman Neural Network

Figure 3 shows the structure of the proposed model, including the input-layer nodes, the hidden-layer nodes, and the bearing-layer nodes. With the help of the MATLAB neural network toolbox, the Elman neural network can be built easily. Specifically, the toolbox provides the elmannet function, and the network construction can be completed by setting three parameters of this function: the delay time, the number of hidden-layer neurons, and the training function. In this case, the number of hidden-layer neurons is set to 18, and traingdx is chosen as the training function [20–22]. traingdx, that is, gradient descent with momentum and adaptive learning rate backpropagation, is a network training function that updates the weight and bias values according to gradient descent with momentum and an adaptive learning rate; it returns the trained network and the training record. In addition, the maximum number of iterations in training is set to 3000, the maximum number of validation failures is set to 6, and the error tolerance is set to 0.00001, which means that training stops once the error falls below this value [23, 24]. Figure 4 shows the model structure graphic automatically generated by MATLAB.

To construct the Elman neural network, the MATLAB code can be written as follows. Firstly, the three parameters of the elmannet function are set:
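The following is a minimal sketch of this step; the variable name net and the one-step delay are illustrative assumptions, while the hidden-layer size and the training function follow the settings given above.

% Elman network with a one-step delay (assumed), 18 hidden neurons, and traingdx
net = elmannet(1:1, 18, 'traingdx');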

Secondly, the maximum number of iterations in the training is set to 3000:
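A sketch of this setting, assuming the network object net from the previous step:

net.trainParam.epochs = 3000;   % maximum number of training iterations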

Thirdly, the maximum number of validation failures is set to 6, and the error tolerance is set to 0.00001:
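A sketch of these two settings, following the values stated above:

net.trainParam.max_fail = 6;     % maximum number of validation failures
net.trainParam.goal = 0.00001;   % error tolerance (training goal)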

Finally, the network is initialized:
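A sketch of the initialization step:

net = init(net);   % reinitialize the network weights and biases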

After all of the above steps, the construction of the Elman neural network is completed [25–27].

5. Experiments

5.1. Training of the Proposed Model

Once the Elman neural network is built, the model can be trained; however, all the data have to be normalized first, considering the performance and stability of the model. The normalization can be performed with the mapminmax function provided by the MATLAB toolbox, whose default normalization interval is [−1, 1]. The detailed MATLAB code is as follows:
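The following is a minimal sketch of the normalization step; trainx1 follows the text, while trainx, trainy, trainy1, st1, and st2 are illustrative names for the raw training inputs, the raw targets, the normalized targets, and the normalization settings.

% Normalize training inputs and targets to the default interval [-1, 1]
[trainx1, st1] = mapminmax(trainx);
[trainy1, st2] = mapminmax(trainy);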

After the normalization of the training data trainx, the normalized data trainx1 were obtained. The normalized training data (trainx1) were input into the network model to obtain the network output (train_ty1), which was then reverse-normalized to obtain train_ty, the corresponding predicted stock prices for the training data. What we want to emphasize is that the data used in the test should be normalized in the same way first, and the network output should then be reverse-normalized.
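A minimal sketch of the training and reverse normalization described above, reusing the illustrative names from the previous sketches (trainy1 and st2 denote the normalized targets and their normalization settings):

net = train(net, trainx1, trainy1);                 % train on the normalized data
train_ty1 = sim(net, trainx1);                      % network output on the training inputs
train_ty  = mapminmax('reverse', train_ty1, st2);   % reverse normalization back to the price scale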

5.2. The Test Results and the Quantitative Analysis

Figure 5 shows a graph of the actual and predicted values on the training data; the blue solid line is the actual value and the red dotted line represents the Elman network output. Apparently, the model fits the training data well. In addition, we further calculated the residuals of the training results; Figure 6 shows the residuals on the training data. In mathematical statistics, a residual is the difference between the actual observed value and the estimated (fitted) value.

Figure 7 shows a graph of the actual and predicted values on the test data; the black solid line is the actual value and the red dotted line represents the Elman network output. In addition, we further calculated the residuals of the test results; Figure 8 shows the residuals on the test data. The relative error of each prediction is also calculated for further study and analysis, and all relative error values are shown in Tables 1 and 2. By analysing these graphs and data, it is clear that the prediction effect of the model is quite good.
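For reference, the residuals and relative errors reported here can be computed as in the following sketch; actual and predicted are illustrative vectors of the true and forecast prices:

residual = actual - predicted;                  % residual: observed value minus fitted value
relative_error = abs(residual) ./ abs(actual);  % relative error of each prediction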

6. Conclusions

This study is based on the basic premise that historical stock prices have a great impact on the short-term future stock price. On this premise, we established an improved Elman model and collected the historical data of the Shanghai composite index and the Shenzhen composite index as the datasets for the experiment. For dataset processing, each dataset was divided into two parts, one for training and the other for testing, and the data were normalized. Regarding model building, we took MATLAB as the platform, set the number of hidden-layer neurons to 18, and chose traingdx as the training function. In terms of training, the maximum number of iterations was set to 3000, the maximum number of validation failures to 6, and the error tolerance to 0.00001. Finally, we used the model on both the training data and the test data. To analyse the experimental results, we also calculated the relative errors and the residuals and plotted them. Based on the Elman network, this study predicted the short-term future stock price and achieved a good prediction effect; however, predicting the long-term stock price remains unrealistic and difficult to achieve [28–30]. This study provides an effective experimental method for predicting the near-future stock price.

Data Availability

All of the data used in this study are publicly available on the Internet and are easily accessible.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Key Project of the Natural Science Research of Higher Education Institutions in Anhui Province (Grant no. KJ2018A0461); the Anhui Province Key Research and Development Program Project (Grant no. 201904a05020091); the Provincial Quality Engineering Project of the Department of Education of Anhui Province (Grant no. 2019mooc283); and the Domestic and Foreign Research and Study Program for Outstanding Young Backbone Talents in Colleges and Universities (Grant no. Gxgnfx2019034).