Abstract
In relaxation subgradient minimization methods, the descent direction, built from subgradients obtained during the iterations, forms an obtuse angle with all subgradients in a neighborhood of the current minimum. Minimizing along this direction takes the iterate out of this neighborhood and prevents the method from looping. To find such a direction, we formulate the problem as a system of inequalities and propose an algorithm with space extension, close to the iterative least-squares method, for solving it. The convergence rate of the method is proportional to the admissible value of the space extension parameter, which is bounded by the characteristics of the subgradient sets. A theoretical analysis of the learning algorithm with space extension allowed us to identify its components and modify them so that larger values of the extension parameter can be used where possible. On this basis, we propose and substantiate a new learning method with space extension and a corresponding subgradient method for nonsmooth minimization. Computational experiments confirm their efficiency. Our approach can be used to develop new space-extension algorithms for relaxation subgradient minimization.
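To make the inequality formulation concrete, the sketch below (Python with NumPy) finds a vector g with ⟨g, s_i⟩ ≥ 1 for a finite set of collected subgradients s_i, so that d = −g forms an obtuse angle with each of them. This is a minimal illustration only: the function name descent_direction and the use of the classical Agmon/Motzkin–Schoenberg relaxation-projection solver are our assumptions for exposition, not the paper's learning algorithm with space extension.

```python
import numpy as np

def descent_direction(subgrads, lam=1.0, max_iter=1000, tol=1e-12):
    """Find g with <g, s_i> >= 1 for all collected subgradients s_i,
    so that d = -g forms an obtuse angle with every s_i.

    Illustrative relaxation-projection solver (Agmon / Motzkin-
    Schoenberg) for the system of inequalities; 0 < lam <= 2 is the
    relaxation parameter. The paper's learning algorithm additionally
    applies a space-extension (metric) transformation, omitted here.
    """
    S = np.asarray(subgrads, dtype=float)  # rows are subgradients
    g = S.mean(axis=0)                     # initial guess
    for _ in range(max_iter):
        residuals = 1.0 - S @ g            # > 0 where <g, s_i> >= 1 fails
        i = int(np.argmax(residuals))
        if residuals[i] <= tol:
            return -g                      # all inequalities hold
        s = S[i]
        if s @ s <= tol:                   # zero subgradient: optimum reached
            break
        # project g onto the violated half-space {x : <x, s> >= 1}
        g = g + lam * residuals[i] / (s @ s) * s
    return -g

# e.g. two subgradients of f(x) = |x1| + |x2| near the kink on the x1-axis
S = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
d = descent_direction(S)  # d = [-1., 0.] is obtuse to both subgradients
```

In the full method, a line search along d yields a new point and a new subgradient, the inequality system is updated and re-solved; the space extension parameter controls how strongly the metric is stretched along violated subgradients, and it is the admissible size of this parameter that the paper's analysis bounds.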
Acknowledgment
This study was supported by the Ministry of Science and Higher Education of the Russian Federation (Project FEFE-2020-0013).
Cite this paper
Krutikov, V.N., Meshechkin, V.V., Kagan, E.S., Kazakovtsev, L.A.: Machine Learning Algorithms of Relaxation Subgradient Method with Space Extension. In: Pardalos, P., Khachay, M., Kazakov, A. (eds.) Mathematical Optimization Theory and Operations Research. MOTOR 2021. Lecture Notes in Computer Science, vol. 12755. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77876-7_32