Abstract
Mathematical optimization is a fundamental tool in decision making. In practice, however, it is often difficult to formulate an optimization problem accurately because some of its parameters are uncertain. Machine learning offers an attractive remedy: first predict the uncertain parameters, then solve the optimization problem based on the prediction. Recently, end-to-end learning approaches that couple these prediction and optimization stages have received attention in both the optimization and machine learning communities. In this paper, we focus on gradient boosting, a powerful ensemble method, and develop an end-to-end learning algorithm that directly maximizes performance on the downstream optimization problem. Our algorithm extends existing gradient-based training through implicit differentiation to second-order optimization, which learns the gradient-boosting model efficiently. We also conduct computational experiments that analyze when end-to-end approaches work well and demonstrate the effectiveness of our approach.
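To make the predict-then-optimize pipeline concrete, the sketch below illustrates the generic idea of differentiating through an optimization layer via the implicit function theorem, on a toy unconstrained quadratic program. It is a minimal illustration of the general technique only, not the authors' gradient-boosting algorithm or its second-order extension; the linear predictor `W`, the matrix `Q`, and the helper functions are hypothetical stand-ins.

```python
# Illustrative sketch (not the paper's implementation) of decision-focused
# learning. The "optimization layer" solves the unconstrained QP
#     z*(c) = argmin_z  0.5 * z' Q z - c' z,
# whose optimality condition Q z - c = 0 gives, by the implicit function
# theorem, the Jacobian dz*/dc = Q^{-1}. We chain the decision-loss gradient
# through this Jacobian to update a linear predictor of the cost vector c.
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_dec = 5, 3
Q = 2.0 * np.eye(n_dec)          # fixed, positive-definite quadratic term
Q_inv = np.linalg.inv(Q)

W = 0.1 * rng.normal(size=(n_dec, n_feat))   # linear predictor: c_hat = W x

def solve(c):
    # Minimizer of the QP; in general this would be a call to a solver.
    return Q_inv @ c

def decision_loss(z, c_true):
    # Realized objective under the true parameters (lower is better).
    return 0.5 * z @ Q @ z - c_true @ z

# One step of end-to-end (decision-focused) gradient descent.
x = rng.normal(size=n_feat)      # features
c_true = rng.normal(size=n_dec)  # true cost parameters

c_hat = W @ x                    # prediction
z_star = solve(c_hat)            # forward pass through the optimizer
print("decision loss:", decision_loss(z_star, c_true))

dL_dz = Q @ z_star - c_true      # dL/dz at the returned solution
dL_dc = Q_inv.T @ dL_dz          # chain through dz*/dc (implicit differentiation)
W -= 0.1 * np.outer(dL_dc, x)    # gradient step on the predictor
```

In the setting of this paper, the predictor would be a gradient-boosting ensemble rather than a linear map, and the derivative of the solution with respect to the predicted parameters would likewise come from implicitly differentiating the optimality conditions of the downstream problem.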
T. Konishi—Supported by JSPS KAKENHI Grant Numbers 17K12743 and JP18H05291, Japan.
T. Fukunaga—Supported by JST PRESTO grant JPMJPR1759, Japan.