
A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities

  • Original Paper
  • Published in Numerical Algorithms

Abstract

Distributed algorithms are receiving renewed attention across multiple disciplines due to the dramatically increasing demand for big data processing. We consider a class of consensus optimization problems over a static network of multiple agents, where each local cost function is the sum of a smooth function and a non-Lipschitz regularization term. Problems of this kind arise widely in scientific and engineering areas such as machine learning and data analysis. Inspired by Bian and Chen (SIAM J. Optim. 23(3), 1718–1741, 2013), we propose a decentralized smoothing quadratic regularization algorithm (abbreviated as D-SQRA) for solving the composite consensus problem with non-Lipschitz singularities. To some extent, D-SQRA can be viewed as an extension, to the decentralized setting, of the smoothing quadratic regularization algorithm (SQRA) proposed in that paper. Our main contribution is to show that D-SQRA inherits the theoretical properties of its centralized counterpart SQRA, in terms of both convergence and worst-case iteration complexity for reaching an \(\epsilon \)-scaled stationary point. We also present numerical examples on sparse sensing problems with synthetic and real datasets to corroborate the effectiveness of the proposed decentralized algorithm.
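For orientation, the problem class described in the abstract can be written schematically as

\[ \min _{x\in \mathbb {R}^{n}} \ \sum _{i=1}^{m} \bigl (f_{i}(x)+r_{i}(x)\bigr ), \]

where \(m\) agents communicate over a static network, agent \(i\) has access only to its own smooth loss \(f_{i}\) and its possibly non-Lipschitz regularizer \(r_{i}\) (for example \(\lambda \Vert x\Vert _{p}^{p}\) with \(0<p<1\) in sparse sensing), and all agents must reach consensus on a common \(x\) through exchanges with their neighbors. This is only an illustrative formulation consistent with the abstract; the precise network, smoothness, and regularizer assumptions are those stated in the paper.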


Availability of data and materials

Not applicable

References

  1. Bian, W., Chen, X.: Worst-case complexity of smoothing quadratic regularization methods for non-Lipschitzian optimization. SIAM J. Optim. 23(3), 1718–1741 (2013)
  2. Boyd, S., Diaconis, P., Xiao, L.: Fastest mixing Markov chain on a graph. SIAM Rev. 46(4), 667–689 (2004)
  3. Cartis, C., Gould, N.I.M., Toint, P.L.: On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming. SIAM J. Optim. 21(4), 1721–1739 (2011)
  4. Cartis, C., Gould, N.I.M., Toint, P.L.: Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Math. Program. 127(2), 245–295 (2011)
  5. Cartis, C., Gould, N.I.M., Toint, P.L.: Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity. Math. Program. 130(2), 295–319 (2011)
  6. Chang, T.-H., Hong, M., Wang, X.: Multi-agent distributed optimization via inexact consensus ADMM. IEEE Trans. Signal Process. 63(2), 482–497 (2014)
  7. Chen, A.I.-A.: Fast distributed first-order methods. PhD thesis, Massachusetts Institute of Technology (2012)
  8. Chen, X.: Smoothing methods for nonsmooth, nonconvex minimization. Math. Program. 134(1), 71–99 (2012)
  9. Chen, X., Ge, D., Wang, Z., Ye, Y.: Complexity of unconstrained \(l_2\)-\(l_p\) minimization. Math. Program. 143(1–2), 371–383 (2014)
  10. Chen, X., Xu, F., Ye, Y.: Lower bound theory of nonzero entries in solutions of \(\ell _2\)-\(\ell _p\) minimization. SIAM J. Sci. Comput. 32(5), 2832–2852 (2010)
  11. Clarke, F.H.: Optimization and Nonsmooth Analysis. SIAM (1990)
  12. Fan, J.: Comments on "Wavelets in statistics: a review" by A. Antoniadis. J. Ital. Stat. Soc. 6(2), 131 (1997)
  13. Hong, M., Hajinezhad, D., Zhao, M.-M.: Prox-PDA: the proximal primal-dual algorithm for fast distributed nonconvex optimization and learning over networks. In: International Conference on Machine Learning, pp. 1529–1538 (2017)
  14. Hong, M., Zeng, S., Zhang, J., Sun, H.: On the divergence of decentralized nonconvex optimization. SIAM J. Optim. 32(4), 2879–2908 (2022)
  15. Jakovetić, D., Xavier, J., Moura, J.M.: Fast distributed gradient methods. IEEE Trans. Autom. Control 59(5), 1131–1146 (2014)
  16. Lee, S., Nedic, A.: Distributed random projection algorithm for convex optimization. IEEE J. Sel. Topics Signal Process. 7(2), 221–229 (2013)
  17. Lian, X., Zhang, C., Zhang, H., Hsieh, C.-J., Zhang, W., Liu, J.: Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In: Advances in Neural Information Processing Systems, pp. 5330–5340 (2017)
  18. Ling, Q., Shi, W., Wu, G., Ribeiro, A.: DLM: decentralized linearized alternating direction method of multipliers. IEEE Trans. Signal Process. 63(15), 4051–4064 (2015)
  19. Matei, I., Baras, J.S.: Performance evaluation of the consensus-based distributed subgradient method under random communication topologies. IEEE J. Sel. Topics Signal Process. 5(4), 754–771 (2011)
  20. Mateos, G., Bazerque, J.A., Giannakis, G.B.: Distributed sparse linear regression. IEEE Trans. Signal Process. 58(10), 5262–5276 (2010)
  21. Mokhtari, A., Ling, Q., Ribeiro, A.: Network Newton. In: 48th Asilomar Conference on Signals, Systems and Computers, pp. 1621–1625 (2014)
  22. Mokhtari, A., Ling, Q., Ribeiro, A.: Network Newton distributed optimization methods. IEEE Trans. Signal Process. 65(1), 146–161 (2017)
  23. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005)
  24. Patterson, S., Eldar, Y.C., Keidar, I.: Distributed compressed sensing for static and time-varying networks. IEEE Trans. Signal Process. 62(19), 4931–4946 (2014)
  25. Qu, G., Li, N.: Harnessing smoothness to accelerate distributed optimization. IEEE Trans. Control Netw. Syst. 5(3), 1245–1260 (2017)
  26. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis, vol. 317. Springer Science & Business Media (2009)
  27. Schizas, I.D., Ribeiro, A., Giannakis, G.B.: Consensus in ad hoc WSNs with noisy links - Part I: distributed estimation of deterministic signals. IEEE Trans. Signal Process. 56(1), 350–364 (2007)
  28. Shi, W., Ling, Q., Wu, G., Yin, W.: EXTRA: an exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 25(2), 944–966 (2015)
  29. Shi, W., Ling, Q., Wu, G., Yin, W.: A proximal gradient algorithm for decentralized composite optimization. IEEE Trans. Signal Process. 63(22), 6013–6023 (2015)
  30. Shi, W., Ling, Q., Yuan, K., Wu, G., Yin, W.: On the linear convergence of the ADMM in decentralized consensus optimization. IEEE Trans. Signal Process. 62(7), 1750–1761 (2014)
  31. Tsitsiklis, J.: Problems in decentralized decision making and computation. PhD thesis, Massachusetts Institute of Technology (1984)
  32. Tsitsiklis, J., Bertsekas, D., Athans, M.: Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Trans. Autom. Control 31(9), 803–812 (1986)
  33. van den Berg, E., Friedlander, M.P.: SPGL1: a solver for large-scale sparse reconstruction (2019). https://friedlander.io/spgl1
  34. van den Berg, E., Friedlander, M.P., Hennenfent, G., Herrmann, F., Saab, R., Yılmaz, O.: Sparco: a testing framework for sparse reconstruction. Tech. Rep. TR-2007-20, Dept. Comput. Sci., Univ. British Columbia, Vancouver (2007). http://www.cs.ubc.ca/labs/scl/sparco
  35. Wai, H.-T., Chang, T.-H., Scaglione, A.: A consensus-based decentralized algorithm for non-convex optimization with application to dictionary learning. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3546–3550 (2015)
  36. Wai, H.-T., Lafond, J., Scaglione, A., Moulines, E.: Decentralized Frank-Wolfe algorithm for convex and nonconvex problems. IEEE Trans. Autom. Control 62(11), 5522–5537 (2017)
  37. Yuan, K., Ling, Q., Yin, W.: On the convergence of decentralized gradient descent. SIAM J. Optim. 26(3), 1835–1854 (2016)
  38. Zeng, J., Yin, W.: On nonconvex decentralized gradient descent. IEEE Trans. Signal Process. 66(11), 2834–2848 (2018)
  39. Zhang, C.-H.: Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 38(2), 894–942 (2010)


Acknowledgements

This work was supported in part by the Natural Science Foundation of Top Talent of SZTU (grant no. GDRC202136) and in part by the National Natural Science Foundation of China (grant no. 12201428). We would also like to thank the two anonymous referees for their insightful and constructive comments, which helped us enrich the content and improve the presentation of the results in this paper.

Funding

The research of the author (Hong Wang) was sponsored in part by the Natural Science Foundation of Top Talent of SZTU under grant number GDRC202136 and in part by the National Natural Science Foundation of China under grant number 12201428.

Author information


Contributions

The original idea, the development of the algorithm and its theoretical analysis, the numerical experiments, and the preparation of the manuscript are all due to the sole bylined author.

Corresponding author

Correspondence to Hong Wang.

Ethics declarations

Ethics approval

Not applicable

Conflict of interest

The author undertakes that:

  • The contents of this manuscript have not been copyrighted or published previously;
  • The contents of this manuscript are not now under consideration for publication elsewhere;
  • The contents of this manuscript will not be copyrighted, submitted, or published elsewhere while acceptance by the Journal is under consideration;
  • There are no directly related manuscripts or abstracts, published or unpublished, by the author of this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, H. A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities. Numer Algor 96, 369–396 (2024). https://doi.org/10.1007/s11075-023-01650-6

