Machine Learning Algorithms of Relaxation Subgradient Method with Space Extension

Conference paper in: Mathematical Optimization Theory and Operations Research (MOTOR 2021)

Abstract

In relaxation subgradient minimization methods, the descent direction, constructed from the subgradients obtained during the iterations, forms an obtuse angle with all subgradients in a neighborhood of the current minimum; minimization along this direction takes the iterate beyond this neighborhood and prevents the method from looping. To find the descent direction, we formulate the problem as a system of inequalities and propose a space-extension algorithm, close to the iterative least-squares method, for solving it. The convergence rate of the method is proportional to the admissible value of the space extension parameter and is limited by the characteristics of the subgradient sets. A theoretical analysis of the learning algorithm with space extension allowed us to identify the components of the algorithm and modify them so that larger values of the extension parameter can be used where possible. On this basis, we propose and substantiate a new learning method with space extension and a corresponding subgradient method for nonsmooth minimization. A computational experiment confirms their efficiency. Our approach can be used to develop new space-extension algorithms for relaxation subgradient minimization.
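As a concrete illustration of the construction described above, the following is a minimal Python/NumPy sketch of one iteration of a subgradient method with space extension, assuming Shor-style dilation in the subgradient direction with an extension parameter alpha > 1 and an exact one-dimensional minimization via golden-section search. The function name `relaxation_subgradient_step`, the default value of `alpha`, and the line-search scheme are assumptions made for this example, not the authors' exact algorithm.

```python
import numpy as np

def relaxation_subgradient_step(f, subgrad, x, H, alpha=2.0, tol=1e-8):
    """One illustrative step of a subgradient method with space extension.

    f       : convex (possibly nonsmooth) objective, f(x) -> float
    subgrad : returns one subgradient of f at a point
    x       : current iterate (np.ndarray)
    H       : metric matrix accumulating space extensions (n x n)
    alpha   : space extension parameter, alpha > 1 (assumed default)
    """
    g = subgrad(x)
    d = -H @ g                       # descent direction in the extended metric

    # Bracket a minimizer of the convex function t -> f(x + t d), t >= 0
    # (assumes f is bounded below along the ray; cap guards the loop).
    t = 1.0
    while f(x + 2.0 * t * d) < f(x + t * d) and t < 1e12:
        t *= 2.0
    a, b = 0.0, 2.0 * t

    # Golden-section search; valid because a convex f is unimodal along a line.
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    while b - a > tol:
        c1, c2 = b - phi * (b - a), a + phi * (b - a)
        if f(x + c1 * d) < f(x + c2 * d):
            b = c2
        else:
            a = c1
    x_new = x + 0.5 * (a + b) * d

    # Space extension along g: with B the transformation matrix and H = B B^T,
    # dilating by 1/alpha in the direction B^T g / ||B^T g|| gives
    #   H <- H + (1/alpha^2 - 1) (H g)(H g)^T / (g^T H g),
    # which shrinks the metric along g so later directions tilt away from it.
    Hg = H @ g
    gHg = float(g @ Hg)
    if gHg > 0.0:
        H = H + (1.0 / alpha**2 - 1.0) * np.outer(Hg, Hg) / gHg
    return x_new, H

# Example usage on the nonsmooth function f(x) = ||x||_1, starting from H = I:
if __name__ == "__main__":
    f = lambda x: np.abs(x).sum()
    sg = lambda x: np.sign(x)        # a valid subgradient of the l1-norm
    x, H = np.array([3.0, -2.0]), np.eye(2)
    for _ in range(30):
        x, H = relaxation_subgradient_step(f, sg, x, H)
    print(x)                         # should be near the minimizer at the origin
```

Starting from H = I, repeating this step shrinks the metric along recently seen subgradients, so successive descent directions turn away from them; this is the mechanism by which the iterates leave the neighborhood of the current minimum instead of looping inside it.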

Acknowledgment

This study was supported by the Ministry of Science and Higher Education of the Russian Federation (Project FEFE-2020-0013).

Author information

Correspondence to Lev A. Kazakovtsev.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Krutikov, V.N., Meshechkin, V.V., Kagan, E.S., Kazakovtsev, L.A. (2021). Machine Learning Algorithms of Relaxation Subgradient Method with Space Extension. In: Pardalos, P., Khachay, M., Kazakov, A. (eds.) Mathematical Optimization Theory and Operations Research. MOTOR 2021. Lecture Notes in Computer Science, vol. 12755. Springer, Cham. https://doi.org/10.1007/978-3-030-77876-7_32

  • DOI: https://doi.org/10.1007/978-3-030-77876-7_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77875-0

  • Online ISBN: 978-3-030-77876-7

  • eBook Packages: Computer Science (R0)
