
HSAC-ALADMM: an asynchronous lazy ADMM algorithm based on hierarchical sparse allreduce communication


Abstract

The distributed alternating direction method of multipliers (ADMM) is an effective algorithm for solving large-scale optimization problems, but its high communication cost limits its scalability. To reduce this cost, an asynchronous lazy ADMM algorithm based on a hierarchical sparse allreduce communication mode (HSAC-ALADMM) is proposed. First, a lazy parameter aggregation strategy filters the parameters transmitted by distributed ADMM, reducing each node's payload per iteration. Second, a hierarchical sparse allreduce communication mode tailored to sparse data aggregates the filtered parameters efficiently. Finally, a Calculator-Communicator-Manager framework implements the proposed algorithm, combining the asynchronous communication protocol with the allreduce communication mode and separating computation from communication via multithreading, which improves the efficiency of both. Experimental results on the L1-regularized logistic regression problem with public datasets show that HSAC-ALADMM is faster than existing asynchronous ADMM algorithms, and that the proposed hierarchical sparse allreduce algorithm exploits the characteristics of sparse data better than existing sparse allreduce algorithms, reducing system time on multi-core clusters.
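Two ideas from the abstract can be illustrated concretely: filtering out parameter entries whose change since the last transmission falls below a threshold (the "lazy" aggregation idea), and aggregating the surviving sparse updates hierarchically, first within a node and then across nodes. The following is a minimal, self-contained NumPy sketch of these two steps only; the threshold rule, the (indices, values) sparse format, the fixed group sizes, and the dense intra-node partial sums are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def lazy_filter(current, last_sent, tau):
    """Keep only entries whose change since the last transmission exceeds tau.
    Returns a sparse update as (indices, values). Placeholder rule, not the
    paper's exact lazy-aggregation criterion."""
    delta = current - last_sent
    idx = np.nonzero(np.abs(delta) >= tau)[0]
    return idx, delta[idx]

def sparse_sum(updates, dim):
    """Accumulate a list of (indices, values) sparse updates into a dense vector."""
    out = np.zeros(dim)
    for idx, vals in updates:
        np.add.at(out, idx, vals)
    return out

# Toy setup: 8 workers arranged as 2 nodes x 4 cores
rng = np.random.default_rng(0)
dim, tau = 16, 0.5
workers = [rng.normal(size=dim) for _ in range(8)]
last_sent = [np.zeros(dim) for _ in range(8)]

# Step 1: each worker lazily filters its update before communicating
sparse_updates = [lazy_filter(w, s, tau) for w, s in zip(workers, last_sent)]

# Step 2: hierarchical aggregation -- intra-node reduce first, then inter-node
groups = [sparse_updates[0:4], sparse_updates[4:8]]   # workers grouped by node
node_partials = [sparse_sum(g, dim) for g in groups]  # local (intra-node) reduce
global_sum = np.sum(node_partials, axis=0)            # reduce across node leaders

print("nonzeros sent per worker:", [len(i) for i, _ in sparse_updates])
print("global aggregate:", np.round(global_sum, 2))
```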



Notes

  1. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#kdd2010 raw version (bridge to algebra).

  2. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#url.
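The footnotes above point to datasets distributed in LIBSVM format. As a minimal sketch of how such a file could be loaded for an L1-regularized logistic regression experiment, the snippet below uses scikit-learn's off-the-shelf loader and solver on a single machine; the file name is a placeholder, and this is a baseline illustration, not the paper's distributed ADMM implementation.

```python
# Load a LIBSVM-format dataset (e.g. the kdd2010 or url datasets referenced
# above) and fit L1-regularized logistic regression with scikit-learn.
from sklearn.datasets import load_svmlight_file
from sklearn.linear_model import LogisticRegression

X, y = load_svmlight_file("kdd2010_raw.libsvm")  # sparse CSR features, labels
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, y)
print("nonzero coefficients:", (clf.coef_ != 0).sum())
```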


Acknowledgements

Thanks to the High Performance Computing Center of Shanghai University for providing computing resources and technical support. Funding was provided by the National Natural Science Foundation of China–Basic Algorithms and Programming Environment of Big Data Analysis Based on Supercomputing (Grant No. U1811461).

Author information

Corresponding author

Correspondence to Yongmei Lei.


About this article


Cite this article

Wang, D., Lei, Y., Xie, J. et al. HSAC-ALADMM: an asynchronous lazy ADMM algorithm based on hierarchical sparse allreduce communication. J Supercomput 77, 8111–8134 (2021). https://doi.org/10.1007/s11227-020-03590-7

