Abstract
Unconstrained optimization problems are typically solved with iterative methods that rely on a line search to determine the step length in each iteration. Traditional line search methods aim at an optimal step length, yet they discard most of the data gathered during the search and focus solely on refining the interval that brackets the step length. This paper proposes a more efficient line search based on Bayesian optimization, which uses all available data points, i.e., function values and gradients, to guide the search toward a potential global minimum along the search direction. The new approach explores the search space more effectively and yields better step lengths, while remaining easy to implement and to integrate into existing frameworks. On the challenging CUTEst test set, it outperforms state-of-the-art methods, solving more problems to optimality with equivalent resource usage.
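The core idea lends itself to a compact illustration. The sketch below is not the paper's implementation; it assumes an RBF kernel with fixed hyperparameters, a grid-based lower-confidence-bound acquisition, and the standard closed-form covariances between a one-dimensional Gaussian process and its derivative, so that each trial step length is chosen using all function values and gradients observed so far along the search direction.

```python
# Minimal sketch of a Bayesian-optimization line search in one dimension.
# NOT the authors' implementation: kernel choice, hyperparameters, and the
# lower-confidence-bound acquisition are illustrative assumptions.
import numpy as np

def rbf_blocks(a, b, ell=1.0, sig=1.0):
    """Covariance blocks of a 1-D GP with RBF kernel between function
    values and derivatives: cov(f,f), cov(f,f'), cov(f',f), cov(f',f')."""
    d = a[:, None] - b[None, :]
    k = sig**2 * np.exp(-0.5 * d**2 / ell**2)
    dk_db = k * d / ell**2                     # cov(f(a), f'(b))
    dk_da = -dk_db                             # cov(f'(a), f(b))
    d2k = k * (1.0 / ell**2 - d**2 / ell**4)   # cov(f'(a), f'(b))
    return k, dk_db, dk_da, d2k

def gp_posterior(alphas, fs, gs, query, ell=1.0, sig=1.0, noise=1e-6):
    """Posterior mean/std of f at `query`, conditioned on the observed
    function values `fs` and derivatives `gs` at step lengths `alphas`."""
    k, kdb, kda, d2 = rbf_blocks(alphas, alphas, ell, sig)
    K = np.block([[k, kdb], [kda, d2]]) + noise * np.eye(2 * len(alphas))
    kq, kq_db, _, _ = rbf_blocks(query, alphas, ell, sig)
    Kq = np.hstack([kq, kq_db])                # cross-covariance to the data
    y = np.concatenate([fs, gs])
    mean = Kq @ np.linalg.solve(K, y)
    var = sig**2 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

def bo_line_search(phi, dphi, alpha_max=2.0, iters=5, kappa=2.0):
    """Choose step lengths on [0, alpha_max] by minimizing a lower
    confidence bound; phi(a) = f(x + a*d), dphi is its derivative."""
    alphas = np.array([0.0, alpha_max])
    fs = np.array([phi(a) for a in alphas])
    gs = np.array([dphi(a) for a in alphas])
    grid = np.linspace(0.0, alpha_max, 200)
    for _ in range(iters):
        mean, std = gp_posterior(alphas, fs, gs, grid)
        a_next = grid[np.argmin(mean - kappa * std)]   # LCB acquisition
        alphas = np.append(alphas, a_next)
        fs = np.append(fs, phi(a_next))
        gs = np.append(gs, dphi(a_next))
    return alphas[np.argmin(fs)]

# Usage: line search for f(x) = x^4 - 3x^2 + x from x0 = 0 along d = -f'(x0).
if __name__ == "__main__":
    f = lambda x: x**4 - 3*x**2 + x
    fp = lambda x: 4*x**3 - 6*x + 1
    x0, d = 0.0, -1.0                          # steepest-descent direction
    step = bo_line_search(lambda a: f(x0 + a*d),
                          lambda a: fp(x0 + a*d) * d)  # chain rule
    print(f"chosen step length: {step:.4f}")
```

Unlike interval-based methods such as Moré-Thuente, which keep only the current bracket, every evaluated pair (function value, directional derivative) here enters the surrogate model, and the uncertainty estimate steers subsequent trial steps toward promising, unexplored regions of the interval.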
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Prusina, T., Laue, S. (2025). Efficient Line Search Method Based on Regression and Uncertainty Quantification. In: Festa, P., Ferone, D., Pastore, T., Pisacane, O. (eds) Learning and Intelligent Optimization. LION 2024. Lecture Notes in Computer Science, vol 14990. Springer, Cham. https://doi.org/10.1007/978-3-031-75623-8_26
DOI: https://doi.org/10.1007/978-3-031-75623-8_26
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-75622-1
Online ISBN: 978-3-031-75623-8
eBook Packages: Computer Science, Computer Science (R0)