
Variable sample size method for equality constrained optimization problems


Abstract

An equality constrained optimization problem with a deterministic objective function and constraints given in the form of mathematical expectation is considered. The constraints are transformed into their Sample Average Approximation (SAA) form, resulting in a deterministic problem. A method that combines a variable sample size procedure with a line search is applied to a penalty reformulation. The method generates a sequence that converges towards first-order critical points. The final stage of the optimization procedure employs the full sample, so the SAA problem is eventually solved, but at a significantly smaller cost. Preliminary numerical results show that the proposed method can produce significant savings compared to the SAA method and some heuristic sample update counterparts, while generating a solution of the same quality.



Acknowledgements

We are grateful to the Associate Editor and the reviewers, whose comments helped us improve the paper. N. Krejić and N. Krklec Jerinkić are supported by the Serbian Ministry of Education, Science and Technological Development, Grant No. 174030.

Author information


Correspondence to Andrea Rožnjik.

Appendix

Algorithm 2

Step 0: Input parameters: \(dm_{k}\), \(\epsilon _{\delta }^{N_{k}}(x_{k})\), \(x_k\), \(N_k\), \(N_{k}^{min}\), \(\nu _{1}\in (0,1)\).

Step 1: Determine \(N_{k+1}\):

1) If \(dm_{k}=\epsilon _{\delta }^{N_{k}}(x_{k})\), set \(N_{k+1}=N_{k}\).

2) If \(dm_{k}>\epsilon _{\delta }^{N_{k}}(x_{k})\): starting with \(N=N_{k}\), while \(dm_{k}>\frac{N_k}{N}\;\epsilon _{\delta }^{N}(x_{k})\) and \(N>N_{k}^{min}\), decrease \(N\) by 1 and calculate \(\epsilon _{\delta }^{N}(x_{k})\). Set \(N_{k+1}=N\).

3) If \(dm_{k}<\epsilon _{\delta }^{N_{k}}(x_{k})\):

i) If \(dm_{k}\ge \nu _{1} \epsilon _{\delta }^{N_{k}}(x_{k})\): starting with \(N=N_{k}\), while \(dm_{k}<\frac{N_k}{N}\;\epsilon _{\delta }^{N}(x_{k})\) and \(N<N_{max}\), increase \(N\) by 1 and calculate \(\epsilon _{\delta }^{N}(x_{k})\). Set \(N_{k+1}=N\).

ii) If \(dm_{k}< \nu _{1}\epsilon _{\delta }^{N_{k}}(x_{k})\), set \(N_{k+1}=N_{max}\).
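For concreteness, the case analysis above can be written as a minimal Python sketch. It assumes a callable `eps_delta(N, x)` that returns the error estimate \(\epsilon _{\delta }^{N}(x)\); the function and argument names are hypothetical and only illustrate the update rule, not an implementation from the paper.

```python
def update_sample_size(dm_k, x_k, N_k, N_k_min, N_max, nu_1, eps_delta):
    """Choose the next sample size N_{k+1} (sketch of Algorithm 2).

    dm_k      -- decrease measure at the current iterate x_k
    eps_delta -- callable (N, x) -> epsilon_delta^N(x), the error estimate
                 (assumed available; hypothetical interface)
    nu_1      -- safeguard parameter in (0, 1)
    """
    eps_k = eps_delta(N_k, x_k)

    # 1) The decrease measure matches the error estimate: keep the sample size.
    if dm_k == eps_k:
        return N_k

    # 2) The decrease dominates the error estimate: try to shrink the sample,
    #    but never below the current lower bound N_k^min.
    if dm_k > eps_k:
        N = N_k
        while N > N_k_min and dm_k > (N_k / N) * eps_delta(N, x_k):
            N -= 1
        return N

    # 3) The error estimate dominates the decrease.
    if dm_k >= nu_1 * eps_k:
        # 3.i) Moderate mismatch: enlarge the sample gradually, up to N_max.
        N = N_k
        while N < N_max and dm_k < (N_k / N) * eps_delta(N, x_k):
            N += 1
        return N

    # 3.ii) Severe mismatch: switch to the full sample immediately.
    return N_max
```

Note the factor \(N_k/N\) in the loop conditions, which rescales the error estimate as the candidate sample size \(N\) moves away from the current one \(N_k\).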

Algorithm 3

We say that we have not made a big enough decrease of the function \(\hat{\theta }_{N_{k+1}}\) if the following inequality holds:

$$\begin{aligned} \frac{\hat{\theta }_{N_{k+1}}(x_{l(k)})-\hat{\theta }_{N_{k+1}}(x_{k+1})}{k+1-l(k)}< \frac{N_{k+1}}{N_{max}}\,\epsilon _{\delta }^{N_{k+1}}(x_{k+1}), \end{aligned}$$

where l(k) is the iteration at which we started to use the sample size \(N_{k+1}\) for the last time.

Step 0: Input parameters: \(N_{k}\), \(N_{k+1}\), \(N_{k}^{min}\).

Step 1: Determine \(N_{k+1}^{min}\):

1) If \(N_{k+1}\le N_{k}\), then \(N_{k+1}^{min}=N_{k}^{min}\).

2) If \(N_{k+1}> N_{k}\), then:

i) if \(N_{k+1}\) is a sample size which has not been used so far, then \(N_{k+1}^{min}=N_{k}^{min}\);

ii) if \(N_{k+1}\) is a sample size which has been used and we have made a big enough decrease of the function \(\hat{\theta }_{N_{k+1}}\) since the last time we used it, then \(N_{k+1}^{min}=N_{k}^{min}\);

iii) if \(N_{k+1}\) is a sample size which has been used and we have not made a big enough decrease of the function \(\hat{\theta }_{N_{k+1}}\) since the last time we used it, then \(N_{k+1}^{min}=N_{k+1}\).
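As before, a minimal Python sketch of this bookkeeping may help. It assumes hypothetical callables `theta_hat(N, x)` for \(\hat{\theta }_{N}(x)\) and `eps_delta(N, x)` for \(\epsilon _{\delta }^{N}(x)\), and that the caller tracks the iterate history and the iteration \(l(k)\) at which each sample size was last put into use; none of these names come from the paper.

```python
def decrease_too_small(theta_hat, eps_delta, N_next, N_max, x_hist, k, l_k):
    """Test the 'not big enough decrease' inequality for sample size N_next.

    x_hist -- list of iterates x_0, ..., x_{k+1}
    l_k    -- iteration l(k) at which N_next was last put into use
    """
    avg_decrease = (theta_hat(N_next, x_hist[l_k])
                    - theta_hat(N_next, x_hist[k + 1])) / (k + 1 - l_k)
    return avg_decrease < (N_next / N_max) * eps_delta(N_next, x_hist[k + 1])


def update_lower_bound(N_k, N_next, N_k_min, used_before, enough_decrease):
    """Choose the next lower bound N_{k+1}^{min} (sketch of Algorithm 3).

    used_before     -- True if sample size N_next was used at some earlier
                       iteration
    enough_decrease -- True if decrease_too_small(...) was False for N_next
                       since its last use
    """
    # 1) The sample size did not grow: keep the current lower bound.
    if N_next <= N_k:
        return N_k_min
    # 2.i) and 2.ii) A fresh sample size, or one whose last use produced a
    # big enough decrease: keep the current lower bound as well.
    if not used_before or enough_decrease:
        return N_k_min
    # 2.iii) Revisiting a sample size that did not pay off last time:
    # forbid shrinking below it from now on.
    return N_next
```

The design intent is that a sample size which failed to deliver sufficient average decrease per iteration becomes a floor for all later iterations, preventing the method from repeatedly dropping back to an unproductive sample size.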

Cite this article

Krejić, N., Krklec Jerinkić, N. & Rožnjik, A. Variable sample size method for equality constrained optimization problems. Optim Lett 12, 485–497 (2018). https://doi.org/10.1007/s11590-017-1143-8
