
“It Looks All the Same to Me”: Cross-Index Training for Long-Term Financial Series Prediction

  • Conference paper
  • Machine Learning, Optimization, and Data Science (LOD 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14505)


Abstract

We investigate a number of Artificial Neural Network architectures (both well-known and more “exotic” ones) applied to long-term forecasting of financial index time series on different global markets. The particular focus of this research is the correlation of these indexes’ behaviour from the perspective of cross-training Machine Learning algorithms: would training an algorithm on an index from one global market yield similar, or even better, accuracy when the resulting model is used to predict another index from a different market? The predominantly positive answer we demonstrate is another argument in favour of the long-debated Efficient Market Hypothesis of Eugene Fama.
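The cross-training protocol the abstract describes can be sketched with a toy example: fit a model on one index series, then score the same fitted model on a different series. The AR(1) model, the random-walk generators, and all names below are illustrative assumptions for this sketch, not the paper's actual architectures or data.

```python
import random

def fit_ar1(series):
    # Ordinary-least-squares fit of s[t+1] = a * s[t] + b (an AR(1) model).
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var
    return a, my - a * mx

def mse(series, model):
    # One-step-ahead mean squared prediction error on a series.
    a, b = model
    errs = [(a * s0 + b - s1) ** 2 for s0, s1 in zip(series[:-1], series[1:])]
    return sum(errs) / len(errs)

def random_walk(n, seed):
    # Synthetic "index": cumulative sum of small Gaussian returns.
    rng = random.Random(seed)
    level, path = 100.0, []
    for _ in range(n):
        level += rng.gauss(0.001, 0.01)
        path.append(level)
    return path

index_a = random_walk(1000, seed=1)   # stand-in for an index from market A
index_b = random_walk(1000, seed=2)   # stand-in for an index from market B

model_a = fit_ar1(index_a)            # train on market A only
err_self = mse(index_a, model_a)      # in-market (same-index) error
err_cross = mse(index_b, model_a)     # cross-market error
print(err_self, err_cross)
```

When the two series are generated by the same process, as here, the cross-market error lands in the same range as the in-market error; the paper's question is whether real indexes from different markets behave similarly enough for this to hold with trained neural models.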


References

  1. Beheim, L., Zitouni, A., Belloir, F., de la Housse, C.D.M.: New RBF neural network classifier with optimized hidden neurons number. WSEAS Trans. Syst. (2), 467–472 (2004)

  2. Broomhead, D.S., Lowe, D.: Radial basis functions, multi-variable functional interpolation and adaptive networks. Technical report, Royal Signals and Radar Establishment Malvern (United Kingdom) (1988)

  3. Eun, C.S., Shim, S.: International transmission of stock market movements. J. Financ. Quant. Anal. 24(2), 241–256 (1989)

  4. Fama, E.F.: Efficient capital markets: a review of theory and empirical work. J. Financ. 25(2), 383–417 (1970)

  5. Girosi, F., Poggio, T.: Representation properties of networks: Kolmogorov’s theorem is irrelevant. Neural Comput. 1(4), 465–469 (1989)

  6. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York (2001)

  7. He, Q.Q., Pang, P.C.I., Si, Y.W.: Multi-source transfer learning with ensemble for financial time series forecasting. In: 2020 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), pp. 227–233. IEEE (2020)

  8. Ivakhnenko, A.G.: Polynomial theory of complex systems. IEEE Trans. Syst. Man Cybern. 4, 364–378 (1971)

  9. Jakaite, L., Schetinin, V., Maple, C., et al.: Bayesian assessment of newborn brain maturity from two-channel sleep electroencephalograms. Comput. Math. Methods Med. 2012 (2012)

  10. Jensen, M.C.: The performance of mutual funds in the period 1945–1964. J. Financ. 23(2), 389–416 (1968)

  11. Kolmogorov, A.N.: On the representation of continuous functions of several variables by superpositions of continuous functions of a smaller number of variables. Am. Math. Soc. (1961)

  12. Kurkin, S.A., Pitsik, E.N., Musatov, V.Y., Runnova, A.E., Hramov, A.E.: Artificial neural networks as a tool for recognition of movements by electroencephalograms. In: ICINCO (1), pp. 176–181 (2018)

  13. Kůrková, V.: Kolmogorov’s Theorem is relevant. Neural Comput. 3(4), 617–622 (1991)

  14. Nabipour, M., Nayyeri, P., Jabani, H., Mosavi, A., Salwana, E.: Deep learning for stock market prediction. Entropy 22(8), 840 (2020)

  15. Nyah, N., Jakaite, L., Schetinin, V., Sant, P., Aggoun, A.: Evolving polynomial neural networks for detecting abnormal patterns. In: 2016 IEEE 8th International Conference on Intelligent Systems (IS), pp. 74–80. IEEE (2016)

  16. Park, J., Sandberg, I.W.: Universal approximation using radial-basis-function networks. Neural Comput. 3(2), 246–257 (1991)

  17. Pinkus, A.: Approximation theory of the MLP model in neural networks. Acta Numer. 8, 143–195 (1999)

  18. Poterba, J.M., Summers, L.H.: Mean reversion in stock prices: evidence and implications. J. Financ. Econ. 22(1), 27–59 (1988)

  19. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  20. Roll, R.: Orange juice and weather. Am. Econ. Rev. 74(5), 861–880 (1984)

  21. Roll, R.: What every CFO should know about scientific progress in financial economics: what is known and what remains to be resolved. Financ. Manage. 23(2), 69–75 (1994)

  22. Schetinin, V., Jakaite, L., Schult, J.: Informativeness of sleep cycle features in Bayesian assessment of newborn electroencephalographic maturation. In: 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), pp. 1–6. IEEE (2011)

  23. Selitskaya, N., et al.: Deep learning for biometric face recognition: experimental study on benchmark data sets. Deep Biomet. 71–97 (2020)

  24. Selitskiy, S.: Kolmogorov’s gate non-linearity as a step toward much smaller artificial neural networks. In: Proceedings of the 24th International Conference on Enterprise Information Systems, vol. 1, pp. 492–499 (2022)

  25. Selitskiy, S.: Elements of active continuous learning and uncertainty self-awareness: a narrow implementation for face and facial expression recognition. In: Goertzel, B., Iklé, M., Potapov, A., Ponomaryov, D. (eds.) AGI 2022. LNCS, vol. 13539, pp. 394–403. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19907-3_38

  26. Selitskiy, S., Christou, N., Selitskaya, N.: Using statistical and artificial neural networks meta-learning approaches for uncertainty isolation in face recognition by the established convolutional models. In: Nicosia, G., et al. (eds.) LOD 2021. LNCS, vol. 13164, pp. 338–352. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-95470-3_26

  27. Selitsky, S.: Hybrid convolutional-multilayer perceptron artificial neural network for person recognition by high gamma EEG features. Medicinskiy Vest. Severnogo Kavkaza 17(2), 192–196 (2022)

  28. Sewell, M.: History of the efficient market hypothesis. Rn 11(04), 04 (2011)

  29. Shleifer, A.: Inefficient Markets: An Introduction to Behavioural Finance. OUP, Oxford (2000)

  30. Stoean, C., Paja, W., Stoean, R., Sandita, A.: Deep architectures for long-term stock price prediction with a heuristic-based strategy for trading simulations. PLoS One 14(10), e0223593 (2019)

  31. Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)

  32. Tsang, G., Deng, J., Xie, X.: Recurrent neural networks for financial time-series modelling. In: 2018 24th International Conference on Pattern Recognition (ICPR), pp. 892–897. IEEE (2018)

  33. Venugopal, V., Baets, W.: Neural networks and statistical techniques in marketing research: a conceptual comparison. Mark. Intell. Plann. (1994)

  34. Wickstrøm, K., Kampffmeyer, M., Mikalsen, K.Ø., Jenssen, R.: Mixing up contrastive learning: self-supervised representation learning for time series. Pattern Recogn. Lett. 155, 54–61 (2022)

  35. Yen, G., Lee, C.F.: Efficient market hypothesis (EMH): past, present and future. Rev. Pac. Basin Financ. Mark. Policies 11(02), 305–329 (2008)

  36. Zhang, G.P.: A neural network ensemble method with jittered training data for time series forecasting. Inf. Sci. 177(23), 5329–5346 (2007)


Author information

Correspondence to Stanislav Selitskiy.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Selitskiy, S. (2024). “It Looks All the Same to Me”: Cross-Index Training for Long-Term Financial Series Prediction. In: Nicosia, G., Ojha, V., La Malfa, E., La Malfa, G., Pardalos, P.M., Umeton, R. (eds) Machine Learning, Optimization, and Data Science. LOD 2023. Lecture Notes in Computer Science, vol 14505. Springer, Cham. https://doi.org/10.1007/978-3-031-53969-5_26


  • DOI: https://doi.org/10.1007/978-3-031-53969-5_26


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53968-8

  • Online ISBN: 978-3-031-53969-5

  • eBook Packages: Computer Science, Computer Science (R0)
