
Implementation of reduced gradient with bisection algorithms for non-convex optimization problem via stochastic perturbation

  • Original Paper
  • Journal: Numerical Algorithms

Abstract

In this paper, we propose an implementation of the stochastic perturbation of reduced gradient and bisection (SPRGB) method for optimizing a non-convex differentiable function subject to linear equality constraints and non-negativity bounds on the variables. At each iteration, we compute a search direction by the reduced gradient, and an optimal line search by the bisection algorithm along this direction yields a decrease in the objective value. The stochastic perturbation is designed to establish the global convergence of the algorithm. An implementation and tests of the SPRGB algorithm are given, and numerical results on large-scale problems are presented, which show the efficiency of this approach.
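As a rough illustration only (not the authors' SPRGB implementation, whose details are in the paper), the two ingredients named in the abstract can be sketched as follows: a bisection line search on the sign of the directional derivative, and a decaying Gaussian perturbation of the iterate to help escape poor stationary points of a non-convex objective. The function names and the unconstrained test objective are hypothetical; the paper's method additionally handles linear equality constraints and non-negativity bounds via a reduced gradient.

```python
import numpy as np

def bisection_step(f, x, d, alpha_max=1.0, tol=1e-8, h=1e-6):
    # Bisection on the sign of phi'(alpha), where phi(alpha) = f(x + alpha*d),
    # with phi' approximated by a central difference.
    dphi = lambda a: (f(x + (a + h) * d) - f(x + (a - h) * d)) / (2.0 * h)
    lo, hi = 0.0, alpha_max
    if dphi(hi) <= 0.0:            # phi still decreasing at alpha_max
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def perturbed_descent(f, grad, x0, n_iter=200, seed=0):
    # Gradient descent with a bisection line search, plus a decaying
    # Gaussian perturbation accepted only when it improves the objective.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        d = -grad(x)
        if np.linalg.norm(d) > 1e-12:
            x = x + bisection_step(f, x, d) * d
        trial = x + rng.normal(scale=1.0 / k, size=x.shape)  # noise ~ O(1/k)
        if f(trial) < f(x):
            x = trial
    return x
```

In this sketch the perturbation variance decays with the iteration counter, echoing the idea that the stochastic term must vanish for convergence while still allowing early iterates to leave the basin of a local minimizer.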



Acknowledgements

We are indebted to the anonymous reviewers and the editor for many suggestions and stimulating comments that improved the original manuscript. The author also thanks Professor Pierre L'Ecuyer of the University of Montreal, who gave several insightful comments.

Author information

Correspondence to Abdelkrim El Mouatasim.


Cite this article

Mouatasim, A.E. Implementation of reduced gradient with bisection algorithms for non-convex optimization problem via stochastic perturbation. Numer Algor 78, 41–62 (2018). https://doi.org/10.1007/s11075-017-0366-1
