DOI: 10.1145/3508352.3549403
Research Article

Personalized Heterogeneity-Aware Federated Search Towards Better Accuracy and Energy Efficiency

Published: 22 December 2022

ABSTRACT

Federated learning (FL) is a distributed learning paradigm that trains a global model across edge and embedded devices without sharing their local data. However, because the participating devices vary widely in type and capability, FL faces severe heterogeneity issues: heterogeneous data and heterogeneous systems degrade both the accuracy and the efficiency of FL deployments at the edge. In this paper, we jointly personalize FL models for heterogeneous systems and heterogeneous data to address these challenges. We first use model inference efficiency to personalize the network scale on each node; the same personalization also guides an efficient FL training process, easing the straggler problem and improving FL's energy efficiency. During FL training, federated search is then used to acquire highly accurate personalized network structures. By accounting for the unique characteristics of FL deployment on edge devices, the personalized network structures obtained by our federated search framework with a lightweight search controller achieve accuracy competitive with state-of-the-art (SOTA) methods, while reducing inference and training energy consumption by up to 3.57× and 1.82×, respectively.
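
The abstract outlines a two-step workflow: scale each client's network to its inference-efficiency budget, then run federated search with a lightweight controller during FL training. The toy sketch below illustrates that workflow in outline only; it is not the paper's implementation, and every name and modeling choice in it (Client, personalize_scale, federated_search_round, the quadratic energy-vs-width assumption, the placeholder local_train and evaluate functions) is an assumption made for illustration.

import math
import random
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    energy_budget: float            # per-inference energy budget (hypothetical units)
    width_multiplier: float = 1.0   # personalized network scale
    arch_choice: int = 0            # currently selected op in a tiny search space

def personalize_scale(clients, base_energy=1.0):
    # Step 1: derive each client's network scale from its inference-efficiency budget.
    # Assumes (for illustration) that inference energy grows roughly quadratically with width.
    for c in clients:
        c.width_multiplier = min(1.0, math.sqrt(c.energy_budget / base_energy))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def local_train(global_weights, client):
    # Placeholder local update: perturb the shared weights. A real client would run
    # SGD on its own (non-IID) data with its personalized architecture.
    return [w + random.gauss(0, 0.01) * client.width_multiplier for w in global_weights]

def evaluate(weights, client):
    # Placeholder reward standing in for validation accuracy of the sampled architecture.
    return random.random()

def federated_search_round(clients, global_weights, controller_scores, space_size=4):
    # Step 2: one FL round. A lightweight controller samples an architecture per client,
    # clients train locally, and the server averages the shared weights (FedAvg-style).
    updates = []
    for c in clients:
        probs = softmax(controller_scores[c.name])
        c.arch_choice = random.choices(range(space_size), weights=probs)[0]
        local = local_train(global_weights, c)
        reward = evaluate(local, c)
        controller_scores[c.name][c.arch_choice] += 0.1 * reward  # reinforce good choices
        updates.append(local)
    return [sum(u[i] for u in updates) / len(updates) for i in range(len(global_weights))]

if __name__ == "__main__":
    clients = [Client("phone", 0.4), Client("gateway", 0.9), Client("mcu", 0.1)]
    personalize_scale(clients)
    scores = {c.name: [0.0] * 4 for c in clients}
    weights = [0.0] * 8
    for _ in range(5):
        weights = federated_search_round(clients, weights, scores)
    for c in clients:
        print(c.name, round(c.width_multiplier, 2), c.arch_choice)

In this sketch the "controller" is just a per-client score table nudged by a reward signal; the paper's actual controller design, search space, and reward formulation are described in the full text.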


Published in

ICCAD '22: Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design
October 2022, 1467 pages
ISBN: 9781450392174
DOI: 10.1145/3508352
Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 457 of 1,762 submissions, 26%

