DOI: 10.1145/3318299.3318340
Research Article

Distributed Machine Learning over Directed Network with Fixed Communication Delays

Published: 22 February 2019

ABSTRACT

In this paper, we present a distributed machine learning algorithm that tolerates fixed communication delays over a directed, strongly connected network. The training dataset is distributed across the agents of the network, and each agent can access only its own local dataset. We combine a distributed convex optimization scheme based on double linear iterations with the corresponding machine learning algorithm, under the assumption that the delay between any pair of agents is time-invariant. Simulations show that the algorithm works under delayed transmission, in the sense that at each agent i the ratio of the estimate x_i(t) to the scaling variable y_i(t) converges over time to the optimal point of the global cost function associated with the machine learning problem.
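The abstract describes the method only at a high level. As a rough illustration, the Python sketch below implements one common form of a double-linear-iteration (push-sum style) gradient method with fixed per-edge delays, consistent with the ratio x_i(t)/y_i(t) mentioned above. The column-stochastic weight matrix A, the delay values tau, the step size alpha, and the local least-squares costs are all assumptions made for this sketch, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): push-sum style distributed
# gradient descent over a directed ring with fixed per-edge delays.
# All weights, delays, step sizes, and costs below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim, T = 4, 2, 2000
alpha = 1e-3                         # constant step size (assumed)

# Directed ring 0 -> 1 -> 2 -> 3 -> 0 with self-loops; columns sum to 1
# (column-stochastic weights, as push-sum requires).
A = np.zeros((n_agents, n_agents))
for j in range(n_agents):
    A[j, j] = 0.5                    # share each agent keeps
    A[(j + 1) % n_agents, j] = 0.5   # share pushed to the next agent
tau = rng.integers(1, 4, size=(n_agents, n_agents))  # fixed delays (steps)

# Local datasets: each agent holds a slice of one global regression problem,
# so the global cost is the sum of local least-squares costs.
w_true = np.array([1.0, -2.0])
X = [rng.normal(size=(20, dim)) for _ in range(n_agents)]
b = [Xi @ w_true + 0.01 * rng.normal(size=20) for Xi in X]

def grad(i, z):
    """Gradient of the local cost f_i(z) = 0.5 * ||X_i z - b_i||^2."""
    return X[i].T @ (X[i] @ z - b[i])

# State: estimate x_i, scaling y_i, and one FIFO delay line per directed edge.
x = [np.zeros(dim) for _ in range(n_agents)]
y = [1.0] * n_agents
edges = [(i, j) for i in range(n_agents) for j in range(n_agents)
         if A[i, j] > 0 and i != j]
buf_x = {(i, j): [np.zeros(dim)] * int(tau[i, j]) for (i, j) in edges}
buf_y = {(i, j): [0.0] * int(tau[i, j]) for (i, j) in edges}

for t in range(T):
    new_x = [A[i, i] * x[i] for i in range(n_agents)]   # retained share
    new_y = [A[i, i] * y[i] for i in range(n_agents)]
    # Each sender j pushes its weighted share into the (i, j) delay line;
    # the value popped now was sent tau[i, j] steps earlier.
    for (i, j) in edges:
        buf_x[(i, j)].append(A[i, j] * x[j])
        buf_y[(i, j)].append(A[i, j] * y[j])
        new_x[i] = new_x[i] + buf_x[(i, j)].pop(0)
        new_y[i] = new_y[i] + buf_y[(i, j)].pop(0)
    # Gradient step on the de-biased ratio z_i = x_i / y_i.
    for i in range(n_agents):
        z_i = new_x[i] / new_y[i]
        x[i] = new_x[i] - alpha * grad(i, z_i)
        y[i] = new_y[i]

print("agent ratios x_i/y_i:", [np.round(x[i] / y[i], 3) for i in range(n_agents)])
print("true weights:        ", w_true)
```

Under these assumptions, the printed per-agent ratios x_i/y_i should approach the common least-squares minimizer despite the delayed transmissions, which is the qualitative behavior the abstract reports; the scaling variable y_i compensates for the mass that is temporarily "in flight" on delayed edges.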


Published in

        ICMLC '19: Proceedings of the 2019 11th International Conference on Machine Learning and Computing
        February 2019
        563 pages
ISBN: 9781450366007
DOI: 10.1145/3318299

        Copyright © 2019 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States
