
DeepEX: Bridging the Gap Between Knowledge and Data Driven Techniques for Time Series Forecasting

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning (ICANN 2019)

Abstract

Artificial Intelligence (AI) can roughly be categorized into two streams, knowledge driven and data driven, both of which have their own advantages. Incorporating knowledge into Deep Neural Networks (DNNs), which are purely data driven, can potentially improve the overall performance of the system. This paper presents such a fusion scheme, DeepEX, that combines these seemingly parallel streams of AI for multi-step time-series forecasting problems. DeepEX merges the best of both worlds while also reducing the amount of data required to train these models. This direction has been explored in the past for single-step forecasting by opting for a residual learning scheme. We analyze the shortcomings of this simple residual learning scheme and enable DeepEX not only to avoid these shortcomings but also to scale to multi-step prediction problems. DeepEX is tested on two commonly used time series forecasting datasets, CIF2016 and NN5, where it achieves competitive results even when trained on a reduced set of training examples. Incorporating external knowledge to reduce a network's reliance on large amounts of accurately labeled data will prove extremely effective in training neural networks for real-world applications where dataset sizes are small and labeling is expensive.

Code available at https://www.github.com/MAchattha4/DeepEX.
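The abstract distinguishes DeepEX from the simpler residual-learning fusion used in earlier single-step work. The sketch below illustrates that baseline idea only, not the DeepEX architecture itself: a knowledge-driven forecaster (here an assumed seasonal-naive model) produces a multi-step forecast, and a data-driven corrector (a scikit-learn MLP, also an assumption) learns the residual between the ground truth and that forecast. The window length, horizon, seasonal period, and synthetic series are all illustrative choices.

```python
# Minimal sketch of a residual-learning fusion for multi-step forecasting.
# This is an illustration of the simple scheme the abstract contrasts against,
# NOT the DeepEX architecture; all hyperparameters and models are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

SEASON = 12   # assumed seasonal period (e.g. monthly data)
WINDOW = 24   # history window fed to the neural corrector
HORIZON = 6   # multi-step forecast horizon

def seasonal_naive(history, horizon, season=SEASON):
    """Knowledge-driven baseline: repeat the last observed season."""
    last_season = history[-season:]
    reps = int(np.ceil(horizon / season))
    return np.tile(last_season, reps)[:horizon]

# Toy series: trend + seasonality + noise (purely synthetic, for illustration).
rng = np.random.default_rng(0)
t = np.arange(400)
series = 0.05 * t + np.sin(2 * np.pi * t / SEASON) + 0.1 * rng.standard_normal(len(t))

# Build supervised examples: input = history window,
# target = residual between the true future and the baseline's forecast.
X, y = [], []
for i in range(WINDOW, len(series) - HORIZON):
    base = seasonal_naive(series[:i], HORIZON)
    X.append(series[i - WINDOW:i])
    y.append(series[i:i + HORIZON] - base)
X, y = np.asarray(X), np.asarray(y)

# Data-driven corrector learns only what the knowledge-driven baseline misses.
corrector = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
corrector.fit(X, y)

# Fused multi-step forecast = baseline forecast + learned residual correction.
base = seasonal_naive(series, HORIZON)
residual = corrector.predict(series[-WINDOW:].reshape(1, -1))[0]
fused_forecast = base + residual
print(fused_forecast)
```

In this scheme the neural network only models what the knowledge-driven component misses, which is the data-efficiency argument the abstract makes; the shortcomings of this simple residual approach, and how DeepEX avoids them while scaling to multi-step prediction, are analyzed in the paper itself.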


Notes

  1. https://github.com/M4Competition/M4-methods/blob/master/005%20-%20vangspiliot/Method-Description-4Theta.pdf.


Author information


Correspondence to Muhammad Ali Chattha or Sheraz Ahmed.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Chattha, M.A. et al. (2019). DeepEX: Bridging the Gap Between Knowledge and Data Driven Techniques for Time Series Forecasting. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds) Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. ICANN 2019. Lecture Notes in Computer Science, vol 11728. Springer, Cham. https://doi.org/10.1007/978-3-030-30484-3_51


  • DOI: https://doi.org/10.1007/978-3-030-30484-3_51

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30483-6

  • Online ISBN: 978-3-030-30484-3

  • eBook Packages: Computer Science, Computer Science (R0)
