
Two-Stage Trained Stacking Model for Univariate Time Series Forecasting

  • Conference paper
Web Information Systems Engineering – WISE 2024 (WISE 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15436)


Abstract

The stacking ensemble model is widely used for forecasting univariate time series data. It combines the predictions of multiple models and has been applied in fields such as economics, energy, and healthcare, where data often fluctuates frequently and takes diverse forms. First, a set of base models is trained on the dataset to make initial predictions. These predictions are then used as input features to train a meta-model. In subsequent forecasts, the trained meta-model merges the new predictions of the base models to produce a more accurate forecast. However, most stacking models train the base models once on all available data and stack their predictions to train the meta-model. Because the base models have then already seen the labels used to train the meta-model, this causes target leakage for the meta-model and can lead to overfitting. To address this issue, we propose a two-stage trained stacking model. The input data is divided into a training part and a label part. In the first stage, the base models are trained on the training part, and their predictions, paired with the label part, are used to train the meta-model. In the second stage, the base models are retrained on all input data, and the meta-model trained in the first stage produces the final prediction. This approach mitigates the overfitting in the prediction phase caused by target leakage during training. We test our model on three different types of datasets. Experimental results show that our stacking ensemble model outperforms the individual base models on all datasets in terms of MAE and MSE, demonstrating both good generalizability and improved performance across various scenarios. Additionally, we compared our two-stage trained stacking model with a basic stacking ensemble framework. The results suggest our model provides more accurate predictions for datasets without clear seasonal features. The code is available at https://github.com/HaiMianXiongDi/Two-Stage-trained-Stacking-Model.
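The two-stage scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the base models (ordinary least squares on lagged windows, and a naive last-value forecast), the meta-model (least squares), the window length, the synthetic data, and the 80/20 split are all illustrative assumptions.

```python
# Two-stage trained stacking, sketched with NumPy only.
# Base/meta model choices and the 80/20 split are illustrative assumptions.
import numpy as np

def make_windows(series, window):
    """Sliding-window setup: predict y[t] from the previous `window` values."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

def ols_fit(X, y):
    """Least-squares fit with a bias term; returns the coefficient vector."""
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def ols_predict(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
X, y = make_windows(series, window=10)

split = int(0.8 * len(X))            # "training part" vs "label part"
X_tr, y_tr, X_lb, y_lb = X[:split], y[:split], X[split:], y[split:]

# Stage 1: train the base models on the training part only, so their
# predictions over the label part are out-of-sample (no target leakage),
# then train the meta-model on those predictions against the label part.
w_ols = ols_fit(X_tr, y_tr)
base_preds = np.column_stack([
    ols_predict(w_ols, X_lb),        # base model 1: OLS on lagged values
    X_lb[:, -1],                     # base model 2: naive last-value forecast
])
w_meta = ols_fit(base_preds, y_lb)

# Stage 2: retrain the base models on ALL input data; the frozen
# meta-model from stage 1 combines their predictions at forecast time.
w_ols_full = ols_fit(X, y)

def forecast(X_new):
    stacked = np.column_stack([ols_predict(w_ols_full, X_new), X_new[:, -1]])
    return ols_predict(w_meta, stacked)

pred = forecast(X_lb)
print("MAE on label part:", np.mean(np.abs(pred - y_lb)))
```

Note the key ordering: the meta-model only ever sees base-model predictions made on data the base models were not trained on, while the final base models still benefit from the full dataset.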




Acknowledgments

The work is partially supported by the National Natural Science Foundation of China (Nos. 62072088, U22A2025, 62232007, U23A20309), and Liaoning Provincial Science and Technology Plan Project - Key R&D Department of Science and Technology (No. 2023JH2/101300182) and 111 Project (No. B16009).

Author information


Corresponding author

Correspondence to Bin Wang.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, H., Wang, B., Liu, S., Yang, X., Wang, J., Yu, S. (2025). Two-Stage Trained Stacking Model for Univariate Time Series Forecasting. In: Barhamgi, M., Wang, H., Wang, X. (eds) Web Information Systems Engineering – WISE 2024. WISE 2024. Lecture Notes in Computer Science, vol 15436. Springer, Singapore. https://doi.org/10.1007/978-981-96-0579-8_14


  • DOI: https://doi.org/10.1007/978-981-96-0579-8_14

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-96-0578-1

  • Online ISBN: 978-981-96-0579-8

  • eBook Packages: Computer Science, Computer Science (R0)
