
DeepCascade-WR: a cascading deep architecture based on weak results for time series prediction

  • Original Article
International Journal of Machine Learning and Cybernetics

Abstract

Noisy and nonstationary real-world time series prediction (TSP) is a challenging task. Confronted with such tasks, traditional shallow models often lack sufficient predictive power, whereas research on deep learning (DL) has made milestone breakthroughs in recent years, and the DL paradigm has gradually become indispensable for these complex tasks. In this work, a cascading deep architecture based on weak results (DeepCascade-WR) is established, which possesses deep models’ marked capability of learning feature representations from complex data. In DeepCascade-WR, weak prediction results are defined, innovating on the forecasting mode of traditional TSP. The original data are reconstituted with prior knowledge, generating attribute vectors that carry valid predictive information. DeepCascade-WR possesses online learning ability and effectively avoids the retraining problem, owing to the properties of OS-ELM, one of its base models. In addition, ELM is exploited as the other base model, so DeepCascade-WR naturally inherits valuable virtues from ELM, including fast training speed, good generalization ability, and avoidance of local optima. In the empirical results, DeepCascade-WR demonstrates superior predictive performance on five benchmark financial datasets, i.e., ^DJI, ^GSK, ^HSI, JOUT, and the S&P 500 Index, compared with its base learners and other state-of-the-art algorithms.
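To make the roles of the two base learners concrete, the sketch below gives a minimal, hypothetical Python illustration of an ELM regressor (closed-form output weights via the pseudoinverse), an OS-ELM-style recursive update that absorbs new data chunks without retraining, and a two-level cascade in which level-1 weak prediction results are appended to the attribute vectors fed to level 2. The class names, sliding-window feature construction, window length, synthetic series, and cascade depth are all assumptions made for illustration; this is not the DeepCascade-WR implementation described in the paper.

```python
import numpy as np


class ELMRegressor:
    """Extreme learning machine: random hidden layer, least-squares output weights."""

    def __init__(self, n_hidden=32, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Sigmoid activations of a fixed random projection of the inputs.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights via the Moore-Penrose pseudoinverse: no iterative tuning.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta


class OSELMRegressor(ELMRegressor):
    """OS-ELM-style regressor: recursive least-squares update for new data chunks."""

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(self.n_hidden))
        self.beta = self.P @ H.T @ y
        return self

    def partial_fit(self, X, y):
        # Woodbury-style update of P and beta, so the model is not retrained from scratch.
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)
        return self


def make_windows(series, window=5):
    # Reconstitute a univariate series into attribute vectors of lagged values.
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]


# Toy usage: a two-level "weak result" cascade on a synthetic noisy series.
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(1).normal(size=500)
X, y = make_windows(series)

level1 = ELMRegressor(seed=0).fit(X, y)
weak = level1.predict(X).reshape(-1, 1)   # weak prediction results from level 1
X_aug = np.hstack([X, weak])              # attribute vectors augmented with weak results
level2 = OSELMRegressor(seed=1).fit(X_aug, y)

# Absorb a new chunk online (here the last 50 windows stand in for newly arriving data).
level2.partial_fit(X_aug[-50:], y[-50:])
```

The closed-form least-squares solution in fit is what underlies the fast-training and no-local-optima properties attributed to ELM, and the partial_fit update illustrates the OS-ELM property that removes the need to retrain when new observations arrive.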



Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grant No. 61473150.

Author information

Corresponding author

Correspondence to Qun Dai.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, C., Dai, Q. & Song, G. DeepCascade-WR: a cascading deep architecture based on weak results for time series prediction. Int. J. Mach. Learn. & Cyber. 11, 825–840 (2020). https://doi.org/10.1007/s13042-019-00994-7

