S7. Chaos, Machine Learning and Deep Learning
Abstract—In this study, we proposed a 2-stage hybrid approach for financial time series forecasting wherein chaos is modeled in stage-1 and forecasting is accomplished using machine learning and deep learning algorithms in stage-2. The effectiveness of the proposed hybrid is tested on forecasting Consumer Price Index Inflation of Food & Beverages, Fuel & Light, and Headline in India. This is a first-of-its-kind study where chaos is modeled and deep learning is employed in forecasting macroeconomic time series. From the results, it is inferred that Chaos + Machine Learning hybrids yielded better forecasts than pure machine learning algorithms without chaos in terms of the Symmetric Mean Absolute Percentage Error (SMAPE), Theil's U statistic and the Directional statistic across all the data sets. A deep learning model, namely Long Short-Term Memory (LSTM), was also employed but without much success. The results of the 2-stage hybrid models are compared with models that do not account for chaos. The results are encouraging, and these hybrids can be applied to predict other financial time series.

Index Terms—Chaotic Time Series, Consumer price index inflation forecasting, GRNN, GMDH, Machine Learning, Deep Learning

I. INTRODUCTION

A time series is a list of observations recorded at equal intervals of time. Examples of time series include the stock market price of a company, the gold price, the crude oil price, foreign exchange rates, rainfall, temperature, etc. Time series forecasting uses an algorithm to forecast future values based on the previously observed values, whereas time series analysis is the use of techniques to understand the series better; decomposition is one such method to analyse the trend and seasonality of a series. Domains such as weather forecasting, earthquake prediction, astronomy and finance use time series forecasting models daily.

Forecasting macroeconomic indicators is a difficult task due to the dynamic nature of the macro-economy. Forecasting Consumer Price Index inflation plays a vital role in improving the monetary policy formulation of a country [1]. Traditional algorithms such as the Auto-Regressive Integrated Moving Average (ARIMA) and Seasonal ARIMA [2] are not able to capture the dynamic non-linear nature of inflation. Machine Learning and Deep Learning algorithms gained popularity mainly due to their inherent power to model non-linearity. Further, they can adapt to the dynamic nature of the economy, leading to better forecasts.

This paper proposes a 2-stage hybrid involving, in tandem, chaos modeling and forecasting by a host of machine learning/deep learning algorithms.

The rest of the paper is organized as follows: Section 2 presents an introduction to chaos theory; Section 3 presents the literature review; Section 4 presents our proposed model in detail; Section 5 presents the data set description and evaluation metrics; Section 6 presents a discussion of the results; and finally Section 7 concludes the paper and presents future directions.

II. INTRODUCTION TO CHAOS THEORY

In the late 1800s, the theory of chaos was proposed by Poincaré and later extended by Lorenz [3] in 1963 in order to deal with unpredictable complex nonlinear systems [4]. A chaotic system is deterministic and dynamic, evolves from its initial conditions, and can be described by trajectories in the state space. As the governing equations of a chaotic system are not known in advance, the state space is represented by a phase space, which can be reconstructed from the original series. Packard et al. [5] developed a procedure to reconstruct the phase space using the method of delays: for a time series X_j, where j = 1, 2, 3, ..., N, the phase space is constructed from d-dimensional vectors as in Eq. (1):

Y_j = (X_j, X_{j+τ}, X_{j+2τ}, ..., X_{j+(d−1)τ})    (1)

where τ and d are the lag and the embedding dimension used to reconstruct the state space. The autocorrelation function or the mutual information method is used to find the optimal lag, and Cao's method [6] was used to estimate the minimum embedding dimension (m). The presence of chaos in the system can be tested using the Lyapunov exponent method [7], [8], Kolmogorov entropy [9], the correlation dimension method [9], the false nearest neighbour algorithm [10], etc. In this study, the Lyapunov exponent method was employed to test for the presence of chaos. After the reconstruction of the phase space, the problem is converted into a multiple-input single-output (MISO) problem, and the MISO problem can be modelled by methods ranging from linear models to neural networks.
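As a concrete illustration of this reconstruction step, the following minimal Python sketch (not the authors' code) builds the delay vectors of Eq. (1) with numpy and turns them into the MISO input/target pairs described above; the toy series and the values of d and τ are purely illustrative.

import numpy as np

def delay_embed(x, d, tau):
    # Delay vectors Y_j = (x_j, x_{j+tau}, ..., x_{j+(d-1)tau}), as in Eq. (1).
    n = len(x) - (d - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(d)])

def miso_pairs(x, d, tau):
    # Inputs: delay vectors; target: the observation that immediately follows each vector.
    x = np.asarray(x, dtype=float)
    Y = delay_embed(x, d, tau)
    return Y[:-1], x[(d - 1) * tau + 1:]

series = np.sin(0.3 * np.arange(200)) + 0.05 * np.random.default_rng(0).normal(size=200)
X_in, y_out = miso_pairs(series, d=3, tau=2)
print(X_in.shape, y_out.shape)   # (195, 3) (195,)

Each row of X_in is one reconstructed phase-space vector, and y_out holds the corresponding one-step-ahead targets, which is exactly the MISO formulation handed to the forecasting algorithms in stage-2.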
A. Rosenstein's method

Rosenstein's algorithm [7] estimates the largest Lyapunov exponent (λ) [11] from a given time series. If λ > 0, the series contains chaos; otherwise, it does not.
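The core of the procedure is to follow pairs of initially close trajectories in the reconstructed phase space and to fit the slope of their mean log-divergence. The simplified numpy sketch below follows this recipe; it is illustrative only, and the choices of d, τ, the temporal-separation threshold and the number of divergence steps are assumptions rather than values taken from the paper.

import numpy as np

def largest_lyapunov(x, d=3, tau=2, min_tsep=10, n_steps=20):
    x = np.asarray(x, dtype=float)
    n_vec = len(x) - (d - 1) * tau
    Y = np.column_stack([x[i * tau: i * tau + n_vec] for i in range(d)])
    usable = n_vec - n_steps                       # vectors whose future stays in range
    dists = np.linalg.norm(Y[:usable, None, :] - Y[None, :usable, :], axis=2)
    idx = np.arange(usable)
    dists[np.abs(idx[:, None] - idx[None, :]) < min_tsep] = np.inf   # forbid temporally close "neighbours"
    nn = np.argmin(dists, axis=1)                  # nearest neighbour of each vector
    log_div = []
    for i in range(1, n_steps + 1):
        sep = np.linalg.norm(Y[idx + i] - Y[nn + i], axis=1)
        sep = sep[sep > 0]
        log_div.append(np.mean(np.log(sep)))       # mean log-distance after i steps
    slope, _ = np.polyfit(np.arange(1, n_steps + 1), log_div, 1)
    return slope                                   # slope ~ largest Lyapunov exponent; > 0 suggests chaos

print(largest_lyapunov(np.sin(0.2 * np.arange(500))))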
B. Cao's method

In order to find the minimum embedding dimension of a time series, Cao developed the following method [6]. Let X = (x_1, x_2, x_3, x_4, ..., x_N) be a time series. In the phase space, the time series can be reconstructed as time-delay vectors as in Eq. (2):

Y_i = (x_i, x_{i+τ}, x_{i+2τ}, ..., x_{i+(m−1)τ})    (2)

where Y_i is the i-th reconstructed vector and τ is the time delay.
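Cao's criterion compares nearest-neighbour distances in dimensions m and m+1 and looks for the dimension beyond which the ratio E1(m) = E(m+1)/E(m) stops changing. The sketch below follows that recipe with the Chebyshev (maximum) norm used in Cao's paper; it is an illustrative implementation, not the one used by the authors.

import numpy as np

def embed(x, m, tau):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

def cao_E(x, m, tau):
    Ym, Ym1 = embed(x, m, tau), embed(x, m + 1, tau)
    n = len(Ym1)                                   # the (m+1)-dimensional embedding has fewer vectors
    Ym = Ym[:n]
    dist = np.max(np.abs(Ym[:, None, :] - Ym[None, :, :]), axis=2)   # Chebyshev distances in dimension m
    np.fill_diagonal(dist, np.inf)
    nn = np.argmin(dist, axis=1)                   # nearest neighbour of each vector in dimension m
    num = np.max(np.abs(Ym1 - Ym1[nn]), axis=1)    # distance of the same pair in dimension m+1
    den = dist[np.arange(n), nn]
    ratio = num / den
    return np.mean(ratio[np.isfinite(ratio)])

def cao_E1(x, tau, max_dim=10):
    E = np.array([cao_E(np.asarray(x, float), m, tau) for m in range(1, max_dim + 2)])
    return E[1:] / E[:-1]                          # E1(m) for m = 1, ..., max_dim

# The smallest m after which E1(m) levels off close to 1 is read as the minimum embedding dimension.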
III. LITERATURE SURVEY

Forecasting algorithms of macroeconomics mainly employ two distinct strategies: (i) structural and (ii) non-structural [12]. The structural approaches use economic theory for model specification, whereas the latter follow a data-driven approach, i.e., estimating the model specification using underlying properties of the data. The preferred forecasting algorithms for the non-structural approach are ARIMA and Vector Auto-Regression (VAR) [13]. However, these linear forecasting models failed to recognise macroeconomic business patterns, periods of severe volatility and regime shifts during the 1980s and 1990s, which led to the popularity of non-linear models [14]. Even though inflation depends highly on seasonality, Dua and Kumawat [15] found that incorporating nonstationary seasonality into statistical models does not lead to significant improvements in forecasting results.

The research community is now focused on using Machine Learning (ML) algorithms for time series forecasting [16] as they are both non-linear and dynamic in nature. Liao [17] conducted an experiment to test the effectiveness of ML versus statistical methods (ARIMA, Bayesian forecasting, and economic-theory-based models such as the Phillips curve, term structure and asset pricing models) and found that ML methods outperformed these models on 11 financial series datasets. The superiority of machine learning algorithms over statistical models cannot be taken for granted, as it depends mainly upon the underlying data generating process (DGP), data quality and the forecast horizon [16], [18]. As explained by Makridakis and Hibon [19], mathematically sophisticated or non-linear models do not necessarily provide better estimates than simpler models, and an ensemble of various models beats any specific model. As inflation is unpredictable during times of crisis, Guerron and Zhong [20] proposed a parsimonious semiparametric method which partitions the series into blocks and looks for blocks that are close to recent observations; this model was found to outperform the parametric linear, non-linear, univariate and multivariate alternatives over 1990 to 2015. As Neural Networks (NN) [21] have the capability to learn complex patterns from the data set, McAdam and McNelis [22] used NN to predict inflation in the United States, Japan and Europe. Nakamura [23] also found that NN models outperformed linear forecasting algorithms at a small horizon of 1 to 2 quarters.

Orphanides and van Norden [24] found that output gap estimation improves inflation forecasting in real time using univariate and bivariate models without using past inflation data. Using surveys to forecast inflation was found to outperform ARIMA variations and regressions using Phillips-curve-motivated real activity measures [25]. Several works were published on inflation forecasting as it has an essential role in monetary policy formulation in many countries [1], [23], [26], [27]; in India, the Reserve Bank of India (RBI) officially adopted the Flexible Inflation Targeting (FIT) regime in 2016. Stock and Watson [26] proposed an Unobserved Component Stochastic Volatility (UCSV) model which improved United States (US) inflation forecasting, and for the same task Random Forest [28], a machine learning bagging model based on Decision Trees [29], appears to outperform the linear models [27], [30].

In the Indian context, Kapur [31] applied an augmented Phillips curve framework to forecast inflation and found that non-fuel inflation is driving domestic inflation. Pradhan et al. [32] used NN models to forecast inflation and economic growth in India from 1994 to 2009. Malhotra and Maloo [33], Anand et al. [34] and Pradhan [35] also applied multi-variate Artificial Neural Network (ANN) models to forecast inflation in India. Pratap and Sengupta [1], in an unpublished work, employed machine learning techniques to forecast Consumer Price Index inflation in India; they used the auto-correlation function (ACF) to find the lag of each CPI series in order to transform the series into appropriate input and target variables for training machine learning models. The following are the very few works where chaos was systematically modelled before forecasting was performed. Pradeepkumar and Ravi [36]–[38] successfully applied hybrids of chaos modelling and Quantile Regression Random Forest (QRRF) for FOREX rate forecasting, and they also proposed chaos modelling with multivariate adaptive regression splines (MARS) [39] for foreign exchange rate forecasting. Most recently, hybrids of chaos theory, neural networks and multi-objective evolutionary algorithms were proposed to predict the FOREX rate [40]. In the banking sector, chaos modelling with machine learning algorithms turned out to perform better than models without explicit chaos modelling in the case of automated teller machine (ATM) cash demand prediction [41].

Long Short-Term Memory (LSTM), a deep sequential learning algorithm, was proposed by [42] to solve difficult sequence problems. As LSTM is good at solving sequence problems, many published works [43]–[46] have harnessed the power of LSTM to forecast both financial and non-financial time series.
Fig. 1. Schematic diagram of the 2-Stage Hybrid Model.
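As Fig. 1 indicates, stage-1 models the chaos in the (preprocessed) series by reconstructing its phase space, and stage-2 feeds the resulting delay vectors to a forecasting algorithm. A rough end-to-end sketch of such a pipeline is given below; it reuses the delay-embedding helper sketched in Section II, uses Random Forest purely as an example of a stage-2 learner, and all parameter values are illustrative assumptions rather than the paper's settings.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def delay_embed(x, d, tau):
    n = len(x) - (d - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(d)])

def two_stage_forecast(series, d=3, tau=2, test_len=6):
    series = np.asarray(series, dtype=float)
    # Stage 1: chaos modelling -- delay vectors and one-step-ahead targets
    Y = delay_embed(series, d, tau)
    X_in, y_out = Y[:-1], series[(d - 1) * tau + 1:]
    # Stage 2: machine-learning forecaster, trained on all but the last test_len points
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_in[:-test_len], y_out[:-test_len])
    return model.predict(X_in[-test_len:]), y_out[-test_len:]

preds, actuals = two_stage_forecast(np.sin(0.3 * np.arange(120)), d=3, tau=2)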
V. DATA SET DESCRIPTION AND EVALUATION METRICS

TABLE I: SUMMARY STATISTICS OF THE CPI INFLATION

We considered the consumer price index (CPI) inflation data of Food & Beverages, Fuel & Light and Headline (http://164.100.34.62:8080/Default1.aspx) from the Ministry of Statistics and Programme Implementation (MoSPI), Government of India. The dataset contains the monthly percentage change in the CPI inflation starting from Jan 2012 to Dec 2018. The standard deviation of all the CPI inflation datasets is high, which clearly indicates that inflation in India is highly volatile (see Table I). Fig. 1 shows the monthly CPI Headline and Components plot over 7 years.

A. Data preprocessing

As deep learning and machine learning algorithms require the data to be normalized, we used standard min-max scaling, and to make the non-stationary time series stationary, we adopted the differencing method.

Standard Min-Max Normalization: all the values in the series are scaled to [0, 1] as in Eq. (3):

X_norm = (X − X_min) / (X_max − X_min)    (3)

We considered the last 6 months' data of each series to be the test set and the previous data to be the training data.
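A minimal sketch of this preprocessing is given below: first-difference the series, min-max scale it to [0, 1] as in Eq. (3), and hold out the last 6 monthly observations as the test set. The order of the two operations and the convention of fitting the scaling on the full differenced series are our assumptions, and the optional Augmented Dickey-Fuller check mirrors the stationarity test cited in the references [52]; in practice the scaling is often fitted on the training portion only to avoid look-ahead.

import numpy as np
from statsmodels.tsa.stattools import adfuller   # Augmented Dickey-Fuller test [52]

def preprocess(series, test_len=6):
    x = np.diff(np.asarray(series, dtype=float))          # differencing -> (closer to) stationary
    adf_p = adfuller(x)[1]                                  # small p-value suggests stationarity
    x_min, x_max = x.min(), x.max()
    x_norm = (x - x_min) / (x_max - x_min)                  # Eq. (3): scale to [0, 1]
    return x_norm[:-test_len], x_norm[-test_len:], (x_min, x_max), adf_p

train, test, scale, adf_p = preprocess(np.random.default_rng(1).normal(size=84).cumsum())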
B. Evaluation Metrics

We considered the Symmetric Mean Absolute Percentage Error (SMAPE), Theil's U statistic [53]–[55] and the Directional Statistic (DS) [56] to choose the model with the best quality forecast.
1) Symmetric Mean Absolute Percentage Error (SMAPE): SMAPE is invariant to the scale of the series data.

SMAPE = (100% / n) * Σ_{t=1}^{n} |F_t − A_t| / ((|A_t| + |F_t|) / 2)    (4)
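Eq. (4) transcribes directly into a small helper (a sketch, using the usual convention that F_t are the forecasts and A_t the actual values):

import numpy as np

def smape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(forecast - actual) / denom)

print(smape([2.0, 3.0, 4.0], [2.5, 2.5, 4.0]))   # ~13.5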
TABLE V: EXPERIMENTAL RESULTS FOR CPI HEADLINE INFLATION
TABLE VII: HYPERPARAMETERS FOR CPI FUEL & LIGHT

Algorithm       Hyperparameters
ARIMA           (p, d, q) = (7, 1, 1)
Random Forest   # estimators = 5; max depth = 2
XGBoost         max depth = 8
SVM             kernel = sigmoid; C = 31; gamma = 0.0018
KNN             # neighbors = 7
MLP             learning rate = 0.1; solver = adam; nodes = 5; activation = tanh; max iter = 1000
GRNN            smoothing factor = 0.052
GMDH            nodes = 7

TABLE VIII: HYPERPARAMETERS FOR CPI HEADLINE

Algorithm       Hyperparameters
ARIMA           (p, d, q) = (8, 1, 1)
Random Forest   # estimators = 17; max depth = 2
XGBoost         max depth = 5
SVM             kernel = rbf; C = 271; gamma = 0.0047
KNN             # neighbors = 2
MLP             learning rate = 0.01; solver = adam; nodes = 67; activation = tanh; max iter = 1000
GRNN            smoothing factor = 0.051
GMDH            nodes = 8

TABLE IX: HYPERPARAMETERS FOR CPI FOOD & BEVERAGES

Algorithm       Hyperparameters
ARIMA           (p, d, q) = (8, 0, 1)
Random Forest   # estimators = 19; max depth = 2
XGBoost         max depth = 2
SVM             kernel = rbf; C = 271; gamma = 0.0027
KNN             # neighbors = 2
MLP             learning rate = 0.1; solver = adam; nodes = 30; activation = tanh; max iter = 1000
GRNN            smoothing factor = 0.0527
GMDH            nodes = 8
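To make the hyperparameter tables concrete, the sketch below maps the Table VIII (CPI Headline) settings onto common open-source implementations. The library choices, the remaining default arguments, and the reading of "nodes" as a single hidden layer and "learning rate" as the initial Adam learning rate are our assumptions, not details given in the paper; GRNN and GMDH require specialized packages and are omitted, and the stand-in data takes the place of the delay-vector inputs produced in stage-1.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor                   # assumes the xgboost package is installed
from statsmodels.tsa.arima.model import ARIMA

# Stand-in data: replace with the delay vectors / targets from the chaos-modelling stage.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(80, 8)), rng.normal(size=80)
series = rng.normal(size=84).cumsum()
X_train, X_test, y_train = X[:-6], X[-6:], y[:-6]

models = {
    "Random Forest": RandomForestRegressor(n_estimators=17, max_depth=2),
    "XGBoost":       XGBRegressor(max_depth=5),
    "SVM":           SVR(kernel="rbf", C=271, gamma=0.0047),
    "KNN":           KNeighborsRegressor(n_neighbors=2),
    "MLP":           MLPRegressor(hidden_layer_sizes=(67,), activation="tanh",
                                  solver="adam", learning_rate_init=0.01, max_iter=1000),
}
preds = {name: m.fit(X_train, y_train).predict(X_test) for name, m in models.items()}

# ARIMA works on the (differenced) series itself rather than on delay vectors.
arima_preds = ARIMA(series, order=(8, 1, 1)).fit().forecast(steps=6)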
VII. CONCLUSIONS AND FUTURE DIRECTIONS

In this study, we proposed a 2-stage hybrid approach to forecast a macroeconomic time series, namely consumer price index inflation. The 2-stage method consists of modelling chaos in the first stage and forecasting via machine learning algorithms in the second stage. The results indicate the superiority of Chaos + Machine Learning models over pure machine learning models without chaos. Theil's U statistic confirmed the superiority of the proposed hybrid models. Although we tested the proposed hybrids on the CPI datasets, the proposed method can be extended to other financial and non-financial time series datasets.

REFERENCES

[1] B. Pratap and S. Sengupta, "Macroeconomic Forecasting in India: Does Machine Learning Hold the Key to Better Forecasts?" RBI Working Paper Series No. 04, Tech. Rep., 2019.
[2] G. E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control. San Francisco: Holden-Day, 1970, p. 498.
[3] E. N. Lorenz, "Deterministic nonperiodic flow," Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130–141, 1963.
[4] C. T. Dhanya and D. Nagesh Kumar, "Nonlinear ensemble prediction of chaotic daily rainfall," Advances in Water Resources, vol. 33, no. 3, pp. 327–347, 2010.
[5] N. H. Packard, J. P. Crutchfield, J. D. Farmer, and R. S. Shaw, "Geometry from a time series," Physical Review Letters, vol. 45, no. 9, pp. 712–716, 1980.
[6] L. Cao, "Practical method for determining the minimum embedding dimension of a scalar time series," Physica D: Nonlinear Phenomena, vol. 110, no. 1-2, pp. 43–50, 1997.
[7] M. T. Rosenstein, J. J. Collins, and C. J. De Luca, "A practical method for calculating largest Lyapunov exponents from small data sets," Physica D: Nonlinear Phenomena, vol. 65, no. 1-2, pp. 117–134, 1993.
[8] J.-P. Eckmann, S. O. Kamphorst, D. Ruelle, and S. Ciliberto, "Liapunov exponents from time series," Physical Review A, vol. 34, no. 6, p. 4971, 1986.
[9] P. Grassberger and I. Procaccia, "Measuring the strangeness of strange attractors," in The Theory of Chaotic Attractors. Springer New York, 2004, pp. 170–189.
[10] M. B. Kennel, R. Brown, and H. D. Abarbanel, "Determining embedding dimension for phase-space reconstruction using a geometrical construction," Physical Review A, vol. 45, no. 6, pp. 3403–3411, 1992.
[11] A. Lyapunov, "Problème général de la stabilité du mouvement," in Annales de la Faculté des sciences de Toulouse: Mathématiques, vol. 9, 1907, pp. 203–474.
[12] F. X. Diebold, "The past, present, and future of macroeconomic forecasting," Journal of Economic Perspectives, vol. 12, no. 2, pp. 175–192, 1998.
[13] C. A. Sims, "Macroeconomics and reality," Econometrica, vol. 48, no. 1, pp. 1–48, 1980.
[14] A. Sanyal and I. Roy, "Forecasting major macroeconomic variables in India: Performance comparison of linear, non-linear models and forecast combinations," RBI Working Paper Series No. 11, Tech. Rep., 2014.
[15] P. Dua and L. Kumawat, "Modelling and forecasting seasonality in Indian macroeconomic time series," Centre for Development Economics, Delhi School of Economics, Working Paper no. 136, 2005.
[16] S. Makridakis, E. Spiliotis, and V. Assimakopoulos, "Statistical and machine learning forecasting methods: Concerns and ways forward," PLOS ONE, vol. 13, no. 3, p. e0194889, 2018.
[17] Y. Liao et al., "Machine learning in macro-economic series forecasting," International Journal of Economics and Finance, vol. 9, no. 12, pp. 71–76, 2017.
[18] G. P. Zhang, "Avoiding pitfalls in neural network research," IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews, vol. 37, no. 1, pp. 3–16, 2007.
[19] S. Makridakis and M. Hibon, "The M3-competition: Results, conclusions and implications," International Journal of Forecasting, vol. 16, no. 4, pp. 451–476, 2000.
[20] P. Guerron and M. Zhong, "Macroeconomic forecasting in times of crises," Board of Governors of the Federal Reserve System (U.S.), Finance and Economics Discussion Series 2017-018, 2017.
[21] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[22] P. McAdam and P. McNelis, "Forecasting inflation with thick models and neural networks," Economic Modelling, vol. 22, no. 5, pp. 848–867, 2005.
[23] E. Nakamura, "Inflation forecasting using a neural network," Economics Letters, vol. 86, no. 3, pp. 373–378, 2005.
[24] A. Orphanides and S. van Norden, "The reliability of inflation forecasts based on output gap estimates in real time," Journal of Money, Credit and Banking, vol. 37, no. 3, pp. 583–601, 2005.
[25] A. Ang, G. Bekaert, and M. Wei, "Do macro variables, asset markets, or surveys forecast inflation better?" Journal of Monetary Economics, vol. 54, no. 4, pp. 1163–1212, May 2007.
[26] J. H. Stock and M. W. Watson, "Why has U.S. inflation become harder to forecast?" Journal of Money, Credit and Banking, vol. 39, no. S1, pp. 3–33, Jan 2007.
[27] M. C. Medeiros, G. F. Vasconcelos, Á. Veiga, and E. Zilberman, "Forecasting inflation in a data-rich environment: The benefits of machine learning methods," Journal of Business and Economic Statistics, 2019.
[28] T. K. Ho, "Random decision forests," in Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), 1995.
[29] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen, Classification and Regression Trees. CRC Press, 1984.
[30] V. Ülke, A. Sahin, and A. Subasi, "A comparison of time series and machine learning models for inflation forecasting: Empirical evidence from the USA," Neural Computing and Applications, vol. 30, no. 5, pp. 1519–1527, 2018.
[31] M. Kapur, "Revisiting the Phillips curve for India and inflation forecasting," Journal of Asian Economics, vol. 25, pp. 17–27, Apr 2013.
[32] M. Pradhan, R. Balakrishnan, R. Baqir, G. M. Heenan, S. Nowak, C. Oner, and S. P. Panth, "Policy responses to capital flows in emerging markets," International Monetary Fund, IMF Staff Discussion Notes 11/10, 2011.
[33] A. Malhotra and M. Maloo, "Understanding food inflation in India: A machine learning approach," SSRN Electronic Journal, 2017.
[34] R. Anand, D. Ding, and V. Tulin, Food Inflation in India: The Role for Monetary Policy. International Monetary Fund, 2014, no. 14-178.
[35] R. P. Pradhan, "Forecasting inflation in India: An application of ANN model," International Journal of Asian Business and Information Management (IJABIM), vol. 2, no. 2, pp. 64–73, 2011.
[36] D. Pradeepkumar and V. Ravi, "Forex rate prediction using chaos, neural network and particle swarm optimization," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8795. Springer Verlag, 2014, pp. 363–375.
[37] ——, "FOREX rate prediction using chaos and quantile regression random forest," in 2016 3rd International Conference on Recent Advances in Information Technology (RAIT 2016). IEEE, 2016, pp. 517–522.
[38] ——, "FOREX rate prediction: A hybrid approach using chaos theory and multivariate adaptive regression splines," in Advances in Intelligent Systems and Computing, vol. 515. Springer Verlag, 2017, pp. 219–227.
[39] J. H. Friedman, "Multivariate adaptive regression splines," Annals of Statistics, vol. 19, no. 1, pp. 1–67, 1991.
[40] V. Ravi, D. Pradeepkumar, and K. Deb, "Financial time series prediction using hybrids of chaos theory, multi-layer perceptron and multi-objective evolutionary algorithms," Swarm and Evolutionary Computation, vol. 36, pp. 136–149, 2017.
[41] V. Kamini, V. Ravi, and D. N. Kumar, "Chaotic time series analysis with neural networks to forecast cash demand in ATMs," in 2014 IEEE International Conference on Computational Intelligence and Computing Research (IEEE ICCIC 2014). IEEE, Sep 2015.
[42] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, Nov 1997.
[43] S. Siami-Namini and A. S. Namin, "Forecasting economics and financial time series: ARIMA vs. LSTM," arXiv preprint arXiv:1803.06386, 2018.
[44] W. Bao, J. Yue, and Y. Rao, "A deep learning framework for financial time series using stacked autoencoders and long-short term memory," PLOS ONE, vol. 12, no. 7, 2017.
[45] F. Karim, S. Majumdar, H. Darabi, and S. Chen, "LSTM fully convolutional networks for time series classification," IEEE Access, vol. 6, pp. 1662–1669, 2017.
[46] S. Selvin, R. Vinayakumar, E. A. Gopalakrishnan, V. K. Menon, and K. P. Soman, "Stock price prediction using LSTM, RNN and CNN-sliding window model," in 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, 2017, pp. 1643–1647.
[47] T. K. Ho, "Random decision forests," in Proceedings of 3rd International Conference on Document Analysis and Recognition, vol. 1, 1995, pp. 278–282.
[48] H. Drucker, C. J. C. Burges, L. Kaufman, A. Smola, and V. Vapnik, "Support vector regression machines," in Proceedings of the 9th International Conference on Neural Information Processing Systems (NIPS'96). Cambridge, MA, USA: MIT Press, 1996, pp. 155–161.
[49] T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
[50] A. G. Ivakhnenko, "The group method of data handling; a rival of the method of stochastic approximation," Soviet Automatic Control, vol. 13, pp. 43–55, 1968.
[51] D. Specht, "A general regression neural network," IEEE Transactions on Neural Networks, vol. 2, pp. 568–576, 1991.
[52] R. Mushtaq, "Augmented Dickey Fuller test," SSRN Electronic Journal, 2012.
[53] H. Theil, Applied Economic Forecasting. Amsterdam: North-Holland Pub. Co., 1966.
[54] F. Bliemel, "Theil's forecast accuracy coefficient: A clarification," Journal of Marketing Research, vol. 10, no. 4, p. 444, 1973.
[55] C. W. J. Granger and P. Newbold, "Some comments on the evaluation of economic forecasts," Applied Economics, vol. 5, no. 1, pp. 35–47, 1973.
[56] A. J. Lawrance, "Directionality and reversibility in time series," International Statistical Review / Revue Internationale de Statistique, vol. 59, no. 1, p. 67, 1991.
[57] F. X. Diebold and R. S. Mariano, "Comparing predictive accuracy," Journal of Business and Economic Statistics, vol. 13, no. 3, pp. 253–263, 1995.