
A regularization method for constrained nonlinear least squares

Published in: Computational Optimization and Applications (2020)

Abstract

We propose a regularization method for nonlinear least-squares problems with equality constraints. Our approach is modeled after those of Arreckx and Orban (SIAM J Optim 28(2):1613–1639, 2018. https://doi.org/10.1137/16M1088570) and Dehghani et al. (INFOR Inf Syst Oper Res, 2019. https://doi.org/10.1080/03155986.2018.1559428) and applies a selective regularization scheme that may be viewed as a reformulation of an augmented Lagrangian. Our formulation avoids the occurrence of the operator \(A(x)^T A(x)\), where A is the Jacobian of the nonlinear residual, which typically contributes to the density and ill-conditioning of subproblems. Under boundedness of the derivatives, we establish global convergence to a KKT point or to a stationary point of an infeasibility measure. If second derivatives are Lipschitz continuous and a second-order sufficient condition holds, we establish superlinear convergence without requiring a constraint qualification. The convergence rate is determined by a Dennis–Moré-type condition. We describe our implementation in the Julia language, which supports multiple floating-point systems, and illustrate a simple progressive scheme for obtaining solutions in quadruple precision. Because our approach is similar to applying an SQP method with an exact merit function to a related problem, we show that our implementation compares favorably to IPOPT in IEEE double precision.
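The progressive-precision scheme mentioned in the abstract can be illustrated with a small, hypothetical Julia sketch (not the authors' implementation): each solve is warm-started from the solution obtained at the previous, lower precision, with `BigFloat` standing in here for quadruple precision. The toy problem is a square nonlinear system, for which the Gauss–Newton step reduces to Newton's step.

```julia
using LinearAlgebra

# Toy residual and its Jacobian (hypothetical example, chosen for illustration).
F(x) = [x[1]^2 + x[2] - 1; x[1] - x[2]^3]
J(x) = [2x[1] 1; 1 (-3x[2]^2)]

# Newton/Gauss-Newton iteration; tol scales with the working precision T.
function solve_nls(x0::Vector{T}; tol=sqrt(eps(T)), maxit=50) where T
    x = copy(x0)
    for _ in 1:maxit
        r = F(x)
        norm(r) ≤ tol && break
        x -= J(x) \ r    # generic LU works for any floating-point type
    end
    return x
end

x32  = solve_nls(Float32[1, 1])         # cheap low-precision solve
x64  = solve_nls(Float64.(x32))         # warm start in double precision
xbig = solve_nls(BigFloat.(x64))        # refine in extended precision
```

Because the tolerance is tied to `eps(T)`, each stage only needs a few iterations beyond the accuracy already achieved at the previous precision.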


Notes

  1. https://julialang.org

  2. https://www.mpfr.org

  3. https://github.com/JuliaSmoothOptimizers/NLPModelsIpopt.jl

  4. https://github.com/JuliaDiff/ForwardDiff.jl

  5. https://github.com/JuliaMath/DecFP.jl

  6. https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library

  7. https://github.com/JuliaMath/DoubleDouble.jl

References

  1. Andreani, R., Martínez, J., Ramos, A., Silva, P.: A cone-continuity constraint qualification and algorithmic consequences. SIAM J. Optim. 26(1), 96–110 (2016). https://doi.org/10.1137/15M1008488


  2. Ansari, M.R., Mahdavi-Amiri, N.: A robust combined trust region-line search exact penalty projected structured scheme for constrained nonlinear least squares. Optim. Methods Softw. 30(1), 162–190 (2014). https://doi.org/10.1080/10556788.2014.909970


  3. Armand, P., Omheni, R.: A globally and quadratically convergent primal-dual augmented Lagrangian algorithm for equality constrained optimization. Optim. Methods Softw. 32(1), 1–21 (2015). https://doi.org/10.1080/10556788.2015.1025401


  4. Armand, P., Benoist, J., Orban, D.: From global to local convergence of interior methods for nonlinear optimization. Optim. Methods Softw. 28(5), 1051–1080 (2013). https://doi.org/10.1080/10556788.2012.668905


  5. Armand, P., Benoist, J., Omheni, R., Pateloup, V.: Study of a primal-dual algorithm for equality constrained minimization. Comput. Optim. Appl. 59(3), 405–433 (2014). https://doi.org/10.1007/s10589-014-9679-3


  6. Arreckx, S., Orban, D.: A regularized factorization-free method for equality-constrained optimization. SIAM J. Optim. 28(2), 1613–1639 (2018). https://doi.org/10.1137/16M1088570


  7. Bartholomew-Biggs, M.C., Hernandez, M.D.F.G.: Using the KKT matrix in an augmented Lagrangian SQP method for sparse constrained optimization. J. Optim. Theory Appl. 85(1), 201–220 (1995). https://doi.org/10.1007/BF02192305


  8. Bellavia, S., Gurioli, G., Morini, B., Toint, Ph.L.: Adaptive regularization algorithms with inexact evaluations for nonconvex optimization. Technical report, University of Namur, (2018). URL http://www.optimization-online.org/DB_HTML/2018/11/6919.html

  9. Bidabadi, N.: Using a spectral scaling structured BFGS method for constrained nonlinear least squares. Optim. Methods Softw. (2017). https://doi.org/10.1080/10556788.2017.1385073


  10. Bidabadi, N., Mahdavi-Amiri, N.: Superlinearly convergent exact penalty methods with projected structured secant updates for constrained nonlinear least squares. J. Optim. Theory Appl. 162(1), 154–190 (2013). https://doi.org/10.1007/s10957-013-0438-x


  11. Davis, T.A.: Algorithm 849: a concise sparse Cholesky factorization package. ACM Trans. Math. Softw. 31(4), 587–591 (2005). https://doi.org/10.1145/1114268.1114277


  12. Dehghani, M., Lambe, A., Orban, D.: A regularized interior-point method for constrained linear least squares. INFOR: Inf. Syst. Oper. Res. (2019). https://doi.org/10.1080/03155986.2018.1559428


  13. Dekker, T.J.: A floating-point technique for extending the available precision. Numer. Math. 18(3), 224–242 (1971). https://doi.org/10.1007/BF01397083


  14. Dennis, J.E., Moré, J.J.: Quasi-Newton methods, motivation and theory. SIAM Rev. 19(1), 46–89 (1977). https://doi.org/10.1137/1019005


  15. Dennis, J.E., Walker, H.F.: Convergence theorems for least-change secant update methods. SIAM J. Numer. Anal. 18(6), 949–987 (1981)


  16. Dennis Jr., J.E., Gay, D.M., Welsch, R.E.: An adaptive nonlinear least-squares algorithm. ACM Trans. Math. Softw. 7(3), 348–368 (1981). https://doi.org/10.1145/355958.355965


  17. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. Ser. A 91(2), 201–213 (2002). https://doi.org/10.1007/s101070100263


  18. Duff, I.S.: MA57—a code for the solution of sparse symmetric definite and indefinite systems. ACM Trans. Math. Softw. 30(2), 118–144 (2004). https://doi.org/10.1145/992200.992202


  19. Dunning, I., Huchette, J., Lubin, M.: JuMP: a modeling language for mathematical optimization. SIAM Rev. 59(2), 295–320 (2017). https://doi.org/10.1137/15M1020575


  20. Costa, M.F.P., Fernandes, E.M.G.P.: A primal-dual interior-point algorithm for nonlinear least squares constrained problems. TOP 13(1), 145–166 (2005). https://doi.org/10.1007/bf02578992


  21. Fernández, D., Solodov, M.: Stabilized sequential quadratic programming: a survey. Pesq. Oper. 34(3), 463–479 (2014). https://doi.org/10.1590/0101-7438.2014.034.03.0463


  22. Fernández, D., Pilotta, E.A., Torres, G.A.: An inexact restoration strategy for the globalization of the sSQP method. Comput. Optim. Appl. 54(3), 595–617 (2012). https://doi.org/10.1007/s10589-012-9502-y


  23. Forsgren, A.: Inertia-controlling factorizations for optimization algorithms. Appl. Numer. Math. 43(1–2), 91–107 (2002). https://doi.org/10.1016/s0168-9274(02)00119-8


  24. Friedlander, M.P., Orban, D.: A primal-dual regularized interior-point method for convex quadratic programs. Math. Program. Comp. 4(1), 71–107 (2012). https://doi.org/10.1007/s12532-012-0035-2


  25. Gill, P.E., Kungurtsev, V., Robinson, D.P.: A stabilized SQP method: superlinear convergence. Math. Program. 163(1–2), 369–410 (2016). https://doi.org/10.1007/s10107-016-1066-7


  26. Gill, P.E., Kungurtsev, V., Robinson, D.P.: A stabilized SQP method: global convergence. IMA J. Numer. Anal. 37(1), 407–443 (2016). https://doi.org/10.1093/imanum/drw004


  27. Gould, N.I.M., Orban, D., Toint, Ph.L.: CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. 60(3), 545–557 (2015). https://doi.org/10.1007/s10589-014-9687-3


  28. Gould, N.I.M.: On the accurate determination of search directions for simple differentiable penalty functions. IMA J. Numer. Anal. 6(3), 357–372 (1986). https://doi.org/10.1093/imanum/6.3.357


  29. Gratton, S., Toint, Ph.L.: A note on solving nonlinear optimization problems in variable precision. Technical report, University of Namur, (2018). URL http://www.optimization-online.org/DB_HTML/2018/12/6982.html

  30. Gratton, S., Simon, E., Toint, Ph.L.: Minimizing convex quadratic with variable precision Krylov methods. Technical report, University of Namur, (2018). arXiv:1807.07476

  31. Gupta, S., Agrawal, A., Gopalakrishnan, K., Narayanan, P.: Deep learning with limited numerical precision. In: Proceedings of the 32nd International Conference on International Conference on Machine Learning, vol. 37 of ICML’15, pp. 1737–1746. JMLR.org (2015). http://dl.acm.org/citation.cfm?id=3045118.3045303

  32. Hock, W., Schittkowski, K.: Test Examples for Nonlinear Programming Codes. Springer, Berlin (1981). https://doi.org/10.1007/978-3-642-48320-2


  33. HSL. The HSL Mathematical Software Library. STFC Rutherford Appleton Laboratory (2007). http://www.hsl.rl.ac.uk

  34. Izmailov, A.F., Solodov, M.V., Uskov, E.I.: Combining stabilized SQP with the augmented Lagrangian algorithm. Comput. Optim. Appl. 62(2), 405–429 (2015). https://doi.org/10.1007/s10589-015-9744-6


  35. Izmailov, A.F., Solodov, M.V., Uskov, E.I.: Globalizing stabilized sequential quadratic programming method by smooth primal-dual exact penalty function. J. Optim. Theory Appl. 169(1), 148–178 (2016). https://doi.org/10.1007/s10957-016-0889-y


  36. Li, Z., Osborne, M., Prvan, T.: Adaptive algorithm for constrained least-squares problems. J. Optim. Theory Appl. 114(2), 423–441 (2002). https://doi.org/10.1023/A:1016043919978


  37. Li, Z.F.: On limited memory SQP methods for large scale constrained nonlinear least squares problems. ANZIAM J. 42, 900 (2000). https://doi.org/10.21914/anziamj.v42i0.627


  38. Lukšan, L., Vlček, J.: Sparse and partially separable test problems for unconstrained and equality constrained optimization. Technical Report V-767, Prague: ICS AS CR (1999)

  39. Ma, D., Saunders, M.A.: Solving multiscale linear programs using the simplex method in quadruple precision. In: Numerical analysis and optimization, pp. 223–235. Springer International Publishing (2015). https://doi.org/10.1007/978-3-319-17689-5_9

  40. Mahdavi-Amiri, N., Ansari, M.R.: A superlinearly convergent penalty method with nonsmooth line search for constrained nonlinear least squares. Sultan Qaboos Univ. J. Sci. [SQUJS] 17(1), 103 (2012). https://doi.org/10.24200/squjs.vol17iss1pp103-124


  41. Mahdavi-Amiri, N., Bartels, R.H.: Constrained nonlinear least squares: an exact penalty approach with projected structured quasi-Newton updates. ACM Trans. Math. Softw. 15(3), 220–242 (1989). https://doi.org/10.1145/66888.66891


  42. Mahdavi-Amiri, N., Bidabadi, N.: Constrained nonlinear least squares: a superlinearly convergent projected structured secant method. Int. J. Electr. Comput. Syst. (2012). https://doi.org/10.11159/ijecs.2012.001


  43. Martínez, J.M., Santos, L.T.: Some new theoretical results on recursive quadratic programming algorithms. J. Optim. Theory Appl. 97, 435–454 (1998). https://doi.org/10.1023/A:1022686919295


  44. Micikevicius, P., Narang, S., Alben, J., Diamos, G.F., Elsen, E., García, D., Ginsburg, B., Houston, M., Kuchaiev, O., Venkatesh, G., Wu, H.: Mixed precision training. In: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30–May 3, 2018, Conference Track Proceedings (2018). https://openreview.net/forum?id=r1gs9JgRZ

  45. Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7(1), 17–41 (1981). https://doi.org/10.1145/355934.355936


  46. Orban, D., Arioli, M.: Iterative Solution of Symmetric Quasi-Definite Linear Systems. SIAM Spotlights. Society for Industrial and Applied Mathematics, Philadelphia (2017). https://doi.org/10.1137/1.9781611974737

  47. Orban, D., Siqueira, A.S.: JuliaSmoothOptimizers, a framework for development of nonlinear optimization methods in Julia. Technical report, Les Cahiers du GERAD (2020). In preparation

  48. Schittkowski, K.: Solving constrained nonlinear least squares problems by a general purpose SQP-method. In: Hoffmann, K.-H., Zowe, J., Hiriart-Urruty, J.-B., Lemaréchal, C. (eds.) Trends in Mathematical Optimization: 4th French–German Conference on Optimization, pp. 295–309. Birkhäuser, Basel (1988). https://doi.org/10.1007/978-3-0348-9297-1_19. ISBN 978-3-0348-9297-1


  49. Schittkowski, K.: NLPLSQ: a Fortran implementation of an SQP-Gauss-Newton algorithm for least squares optimization. Technical report, Department of Computer Science, University of Bayreuth (2007)

  50. Vanderbei, R.J.: Symmetric quasidefinite matrices. SIAM J. Optim. 5(1), 100–113 (1995). https://doi.org/10.1137/0805005


  51. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106(1), 25–57 (2006). https://doi.org/10.1007/s10107-004-0559-y


  52. Wright, S.J.: Superlinear convergence of a stabilized SQP method to a degenerate solution. Comput. Optim. Appl. 11(3), 253–275 (1998). https://doi.org/10.1023/A:1018665102534



Acknowledgements

We are very grateful to two anonymous referees for numerous comments and suggestions that greatly improved the presentation of this work.

Funding

Dominique Orban: Research partially supported by an NSERC Discovery Grant.

Author information

Corresponding author

Correspondence to Abel Soares Siqueira.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 232 KB)


About this article


Cite this article

Orban, D., Siqueira, A.S. A regularization method for constrained nonlinear least squares. Comput Optim Appl 76, 961–989 (2020). https://doi.org/10.1007/s10589-020-00201-2

