
A Two-Stage Exact Algorithm for Optimization of Neural Network Ensemble

  • Conference paper
Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2021)

Abstract

We study optimization problems in which the objective function is modeled by feedforward neural networks. Recent literature has explored the use of a single neural network to model uncertain or complex elements of an objective function. However, it is well known that ensembles produce more accurate and more stable predictions than a single neural network. We therefore study how a neural network ensemble can be incorporated into an objective function, and we propose a two-stage optimization algorithm for solving the resulting optimization problem. Preliminary computational results on a global optimization problem and a real-world data set show that the two-stage model greatly outperforms a standard adaptation of previously proposed MIP formulations for optimization models with a single embedded neural network.
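The setting can be illustrated with a minimal sketch. This is not the paper's two-stage algorithm: the weights below are made-up toy values, and exhaustive search over a small grid stands in for an exact solver. It only shows what "optimizing over an ensemble" means, namely maximizing the average of the member networks' outputs, which is why an embedded-MIP formulation must encode one copy of the ReLU logic per ensemble member.

```python
def relu_net(x, W1, b1, W2, b2):
    # Forward pass of a one-hidden-layer ReLU network on a scalar input x.
    hidden = [max(0.0, w * x + b) for w, b in zip(W1, b1)]
    return sum(v * h for v, h in zip(W2, hidden)) + b2

# Two toy member networks (hypothetical weights), each a piecewise-linear
# "tent": relu(x) - 3*relu(x - c), peaking at x = c.
ensemble = [
    ([1.0, 1.0], [0.0, -1.0], [1.0, -3.0], 0.0),  # peaks at x = 1.0
    ([1.0, 1.0], [0.0, -1.5], [1.0, -3.0], 0.0),  # peaks at x = 1.5
]

def ensemble_value(x):
    # The ensemble prediction is the average of the member outputs.
    return sum(relu_net(x, *params) for params in ensemble) / len(ensemble)

# Brute force over a coarse grid in place of an exact MIP solver.
candidates = [0.0, 0.5, 1.0, 1.5, 2.0]
best_x = max(candidates, key=ensemble_value)
best_val = ensemble_value(best_x)
```

In a MIP formulation such as those cited in the paper, each `max(0, ·)` above would be encoded with binary variables and linear constraints, so the formulation size grows with the number of ensemble members, which is the difficulty the two-stage algorithm targets.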



Author information


Correspondence to Keliang Wang, Leonardo Lozano, David Bergman, or Carlos Cardonha.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, K., Lozano, L., Bergman, D., Cardonha, C. (2021). A Two-Stage Exact Algorithm for Optimization of Neural Network Ensemble. In: Stuckey, P.J. (ed.) Integration of Constraint Programming, Artificial Intelligence, and Operations Research. CPAIOR 2021. Lecture Notes in Computer Science, vol 12735. Springer, Cham. https://doi.org/10.1007/978-3-030-78230-6_7


  • DOI: https://doi.org/10.1007/978-3-030-78230-6_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78229-0

  • Online ISBN: 978-3-030-78230-6

  • eBook Packages: Computer Science (R0)
