Abstract
We study nonsmooth convex stochastic optimization problems with a two-point zero-order oracle, i.e., at each iteration one can observe the values of the function's realization at two chosen points. These problems are first smoothed with the well-known double-smoothing technique of B.T. Polyak and then solved with the stochastic mirror descent method. We derive conditions on the admissible level of nonrandom noise in the computation of the function's realization under which the estimate of the method's convergence rate is preserved.
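The scheme described in the abstract can be illustrated with a minimal sketch: a random direction on the unit sphere, a two-point finite-difference gradient estimate with smoothing parameter tau, and an entropic mirror-descent update on the probability simplex. This is not the authors' algorithm; the step size, smoothing parameter, and test function below are arbitrary assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_grad(f, x, tau, e):
    """Two-point zero-order gradient estimate along a random unit
    direction e: g = d * (f(x + tau*e) - f(x - tau*e)) / (2*tau) * e."""
    d = x.size
    return d * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

def mirror_descent_simplex(f, x0, steps, tau=1e-3, lr=0.1):
    """Entropic (multiplicative-weights) mirror descent on the simplex,
    driven only by the two-point function evaluations above."""
    x = x0.copy()
    for _ in range(steps):
        e = rng.standard_normal(x.size)
        e /= np.linalg.norm(e)          # uniform direction on the sphere
        g = two_point_grad(f, x, tau, e)
        x = x * np.exp(-lr * g)         # entropic proximal step
        x /= x.sum()                    # stay on the probability simplex
    return x

# Minimize the nonsmooth convex function f(x) = ||x - c||_1 over the simplex.
c = np.array([0.7, 0.2, 0.1])
f = lambda x: np.abs(x - c).sum()
x_star = mirror_descent_simplex(f, np.ones(3) / 3, steps=2000)
```

With a fixed step size the iterates only reach a noise-dominated neighborhood of the minimizer; the paper's point is how much additional *nonrandom* error in the two function values can be tolerated without degrading the convergence-rate estimate.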
References
Polyak, B.T., Vvedenie v optimizatsiyu (Introduction to Optimization), Moscow: Nauka, 1983.
Granichin, O.N. and Polyak, B.T., Randomizirovannye algoritmy otsenivaniya i optimizatsii pri pochti proizvol’nykh pomekhakh (Randomized Estimation and Optimization Algorithms for Almost Arbitrary Noise), Moscow: Nauka, 2003.
Duchi, J.C., Jordan, M.I., Wainwright, M.J., and Wibisono, A., Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations, IEEE Trans. Inf. Theory, 2015, vol. 61, no. 5, pp. 2788–2806.
Shamir, O., An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback, e-print, 2015. https://arxiv.org/abs/1507.08752
Gasnikov, A.V., Dvurechenskii, P.E., and Nesterov, Yu.E., Stochastic Gradient Methods with Inexact Oracle, Proc. MFTI, 2016, vol. 8, no. 1, pp. 41–91. https://arxiv.org/abs/1411.4218
Gasnikov, A.V., Lagunovskaya, A.A., Usmanova, I.N., et al., Gradient-Free Proximal Methods with Inexact Oracle for Convex Stochastic Nonsmooth Optimization Problems on the Simplex, Autom. Remote Control, 2016, vol. 77, no. 11, pp. 2018–2034.
Gasnikov, A.V., Krymova, E.A., Lagunovskaya, A.A., et al., Stochastic Online Optimization. Single-Point and Multi-Point Non-Linear Multi-Armed Bandits. Convex and Strongly-Convex Case, Autom. Remote Control, 2017, vol. 78, no. 2, pp. 224–234.
Nemirovskii, A.S. and Yudin, D.B., Slozhnost’ zadach i effektivnost’ metodov optimizatsii (Complexity of Problems and Efficiency of Optimization Methods), Moscow: Nauka, 1979.
Agarwal, A., Dekel, O., and Xiao, L., Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback, Proc. 23 Annual Conf. on Learning Theory, 2010, pp. 28–40.
Bubeck, S. and Cesa-Bianchi, N., Regret Analysis of Stochastic and Nonstochastic Multi-Armed Bandit Problems, Found. Trends Machine Learning, 2012, vol. 5, no. 1, pp. 1–122.
Nemirovski, A., Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, Philadelphia: SIAM, 2013. https://www2.isye.gatech.edu/~nemirovs/LectModConvOpt.pdf
Shapiro, A., Dentcheva, D., and Ruszczynski, A., Lectures on Stochastic Programming: Modeling and Theory, MPS-SIAM Series on Optimization, Philadelphia: SIAM, 2014.
Nesterov, Yu., Primal-Dual Subgradient Methods for Convex Problems, Math. Program., Ser. B, 2009, vol. 120, no. 1, pp. 261–283.
Duchi, J.C., Introductory Lectures on Stochastic Optimization, Park City Math. Ser., 2016. https://stanford.edu/~jduchi/PCMIConvex/Duchi16.pdf
Juditsky, A. and Nesterov, Yu., Deterministic and Stochastic Primal-Dual Subgradient Algorithms for Uniformly Convex Minimization, Stoch. Syst., 2014, vol. 4, no. 1, pp. 44–80.
Hazan, E. and Kale, S., Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization, JMLR, 2014, vol. 15, pp. 2489–2512.
Guigues, V., Juditsky, A., and Nemirovski, A., Non-Asymptotic Confidence Bounds for the Optimal Value of a Stochastic Program, e-print, 2016. https://arxiv.org/abs/1601.07592
Ball, K., An Elementary Introduction to Modern Convex Geometry, in Flavors of Geometry, Levy, S., Ed., Cambridge: Cambridge Univ. Press, 1997, pp. 1–58 (Math Sci. Res. Inst. Publ., vol. 31).
Usmanova, I.N., Gradient-Free Mirror Descent Method with Two-Point Noisy Oracle, B.Sc. Thesis, Applied Mathematics and Physics, Dolgoprudny, MIPT, 2015.
Evans, L.C. and Gariepy, R.F., Measure Theory and Fine Properties of Functions, Boca Raton: CRC Press, 1992. Translated under the title Teoriya mery i tonkie svoistva funktsii, Novosibirsk: Nauchnaya Kniga (IDMI), 2002.
Original Russian Text © A.S. Bayandina, A.V. Gasnikov, A.A. Lagunovskaya, 2018, published in Avtomatika i Telemekhanika, 2018, No. 8, pp. 38–49.
Cite this article
Bayandina, A.S., Gasnikov, A.V. & Lagunovskaya, A.A. Gradient-Free Two-Point Methods for Solving Stochastic Nonsmooth Convex Optimization Problems with Small Non-Random Noises. Autom Remote Control 79, 1399–1408 (2018). https://doi.org/10.1134/S0005117918080039