Preface to Federated Learning: Algorithms, Systems, and Applications: Part 2

Published: 24 August 2022

We are delighted to present this special issue on Federated Learning: Algorithms, Systems, and Applications. Federated learning (FL) enables clients to collaboratively train a shared model while the data remains distributed on the clients rather than in centralized storage. It allows governments and businesses to build lower-latency, lower-power models while preserving data privacy, which is crucial for systems and applications such as healthcare, the Internet of Vehicles (IoV), and smart cities. Because stricter privacy and security regulations exacerbate the data fragmentation and isolation problem, in which data holders are unwilling or prohibited from sharing their raw data freely, emerging frameworks based on federated learning are needed to address these problems.
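The canonical algorithm behind this setting is federated averaging (FedAvg): each client trains on its private shard and only model weights, never raw data, travel to the server, which averages them. A minimal sketch on a toy linear-regression task (function names and the toy model are illustrative, not from any of the articles below):

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local gradient step on private data (toy linear model, squared loss)."""
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """One round of federated averaging: clients train locally,
    only model weights (never raw data) are sent back and averaged."""
    sizes = [len(y) for _, y in client_data]
    updates = [local_update(global_w, d) for d in client_data]
    total = sum(sizes)
    # weight each client's model by its share of the data
    return sum((n / total) * w for n, w in zip(sizes, updates))

# toy usage: two clients hold disjoint shards of one regression problem
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(100):
    w = fed_avg(w, clients)  # converges toward true_w without pooling data
```

Many of the contributions in this issue can be read as hardening or extending this basic loop: making the aggregation robust, private, hierarchical, or asynchronous.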

The purpose of this special issue is to provide a forum for researchers and practitioners to present their latest research findings and engineering experiences in the theoretical foundations, empirical studies, and novel applications of federated learning for next-generation intelligent systems. This special issue consists of two parts. For Part 2, the guest editors selected 11 contributions covering a range of topics within this theme, from privacy-aware IoV service deployment with federated learning in cloud-edge computing to federated multi-task graph learning. This part presents the latest progress in federated learning, which may suggest new directions for your research. You can also learn the basic ideas and methods of federated learning and draw inspiration from how these authors approach their problems. In addition, the structure and figure design of these articles may serve as a template for related papers.

Xu et al., in “PSDF: Privacy-Aware IoV Service Deployment with Federated Learning in Cloud-Edge Computing,” propose a method for privacy-aware IoV service deployment with federated learning in cloud-edge computing that addresses the dynamic service deployment problem for IoV in cloud-edge computing while protecting the privacy of edge servers.

Zhong et al., in “FLEE: A Hierarchical Federated Learning Framework for Distributed Deep Neural Network over Cloud, Edge and End Device,” comprehensively consider the various data distributions on end devices and edges and propose a hierarchical federated learning framework, FLEE, which enables dynamic model updates without redeploying the models.

Dang et al., in “Federated Learning for Electronic Health Records,” survey existing works on FL applications in electronic health records (EHRs) and evaluate the performance of current state-of-the-art FL algorithms on two EHR machine learning tasks of significant clinical importance on a real-world multi-center EHR dataset.

Li et al., in “Auto-weighted Robust Federated Learning with Corrupted Data Sources,” propose Auto-weighted Robust Federated Learning, a novel approach that automatically re-weights the local updates to lower the contribution of corrupted clients who provide low-quality updates to the global model.
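The general idea of down-weighting suspicious clients can be illustrated with a simple variant (this is an illustrative scheme, not the paper's exact weighting rule): treat updates far from a robust central estimate, here the coordinate-wise median, as likely corrupted and shrink their weight in the average.

```python
import numpy as np

def robust_aggregate(updates, temperature=1.0):
    """Illustrative re-weighted aggregation (not the paper's exact rule):
    clients whose updates deviate from the median update get lower weight."""
    U = np.stack(updates)                       # (n_clients, dim)
    center = np.median(U, axis=0)               # robust central estimate
    dists = np.linalg.norm(U - center, axis=1)  # deviation of each client
    w = np.exp(-dists / temperature)            # exponentially down-weight outliers
    w /= w.sum()
    return (w[:, None] * U).sum(axis=0)

# three honest clients near the true update, one corrupted client far away
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
corrupt = [np.array([10.0, -10.0])]
agg = robust_aggregate(honest + corrupt)
# a naive mean would land near [3.25, -1.75]; agg stays near [1, 1]
```

The corrupted client's weight is driven toward zero, so the aggregate tracks the honest consensus.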

Jiang et al., in “SignDS-FL: Local Differentially Private Federated Learning with Sign-based Dimension Selection,” propose an efficient and privacy-preserving federated learning framework based on local differentially private dimension selection, SignDS-FL, which saves the privacy cost of the value perturbation stage by assigning random sign values to the selected dimensions.
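The flavor of sign-based dimension selection can be sketched as follows (an illustrative simplification, not the paper's exact mechanism or privacy accounting): each client privately selects a few important coordinates of its update and reports only their signs, so no raw gradient values leave the device.

```python
import numpy as np

def signds_client_update(grad, k=2, eps=1.0, rng=None):
    """Illustrative sign-based dimension selection (not the exact SignDS-FL
    mechanism): randomly pick k dimensions, biased toward large-magnitude
    coordinates, and report only their signs."""
    rng = rng or np.random.default_rng()
    scores = np.abs(grad)
    # exponential-mechanism-style selection: larger |grad| -> more likely chosen
    probs = np.exp(eps * scores / (2 * scores.max()))
    probs /= probs.sum()
    dims = rng.choice(len(grad), size=k, replace=False, p=probs)
    # send only the sign of each selected coordinate -- no raw values leak
    return {int(i): int(np.sign(grad[i])) for i in dims}

def server_aggregate(reports, d):
    """Server sums sign votes per dimension into a sparse update direction."""
    agg = np.zeros(d)
    for report in reports:
        for dim, sign in report.items():
            agg[dim] += sign
    return agg

# usage: 200 clients whose gradients share one dominant positive dimension
rng = np.random.default_rng(0)
grad = np.array([5.0, 0.1, -0.1, 0.1, 0.1])
reports = [signds_client_update(grad, k=2, eps=4.0, rng=rng) for _ in range(200)]
agg = server_aggregate(reports, d=5)  # dimension 0 collects the most sign votes
```

Each report is just k (dimension, sign) pairs, which is also far cheaper to communicate than a dense perturbed vector.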

Zeng et al., in “CLC: A Consensus-based Label Correction Approach in Federated Learning,” propose a consensus-based label correction approach in FL that corrects noisy labels using a consensus method developed among the FL participants. The consensus-defined class-wise information is used to identify noisy labels and correct them with pseudo-labels.
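The core intuition can be sketched in a few lines (an illustrative majority-vote scheme, not the paper's exact CLC method): if a strong majority of the participants' models agree on a class that differs from a sample's recorded label, the label is likely noisy and is replaced with the consensus pseudo-label.

```python
import numpy as np

def consensus_correct(local_labels, client_predictions, threshold=0.8):
    """Illustrative consensus-based label correction (not the exact CLC method):
    when >= threshold of clients' models agree on a class that differs from the
    recorded label, replace the label with that consensus pseudo-label."""
    preds = np.stack(client_predictions)          # (n_clients, n_samples)
    corrected = local_labels.copy()
    n_clients = preds.shape[0]
    for i in range(preds.shape[1]):
        vals, counts = np.unique(preds[:, i], return_counts=True)
        top = vals[counts.argmax()]               # majority-voted class
        if top != local_labels[i] and counts.max() / n_clients >= threshold:
            corrected[i] = top                    # apply consensus pseudo-label
    return corrected

# usage: sample 1 carries a noisy label (1) but all five clients predict class 2
labels = np.array([0, 1, 2])
preds_per_client = [np.array([0, 2, 2]) for _ in range(5)]
corrected = consensus_correct(labels, preds_per_client)
```

Only labels contradicted by a near-unanimous consensus are touched, which limits the damage a single misbehaving model can do.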

Chen et al., in “Defending against Poisoning Backdoor Attacks on Federated Meta-Learning,” propose a defense mechanism inspired by matching networks, in which the class of an input is predicted from the similarity of its features with a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.

Xie et al., in “An Efficient Learning Framework for Federated XGBoost Using Secret Sharing and Distributed Optimization,” propose a lossless multi-party federated XGBoost learning framework with a security guarantee that reshapes XGBoost’s split-criterion calculation under a secret-sharing setting and solves the leaf-weight calculation problem by leveraging distributed optimization.

Stripelis et al., in “Semi-Synchronous Federated Learning for Energy-Efficient Training and Accelerated Convergence in Cross-Silo Settings,” introduce a novel energy-efficient semi-synchronous federated learning protocol that periodically mixes local models, achieving minimal idle time and fast convergence.

Damaskinos et al., in “FLeet: Online Federated Learning via Staleness Awareness and Performance Prediction,” present FLeet, an online federated learning system acting as a middleware between the Android OS and the machine learning application. It is also the first system that enables online machine learning at the edge.

Liu et al., in “Federated Multi-task Graph Learning,” propose a federated multi-task graph learning framework that considers both scalability and data privacy, learning multiple analysis tasks from decentralized graph data under a privacy-preserving and scalable scheme. Its core is an innovative data-fusion mechanism and a low-latency distributed optimization method.

The guest editors believe the articles in this issue represent the frontiers of current topics in the field of federated learning and hope these articles will stimulate further development in this area. The editors express sincere appreciation to the authors and reviewers for their tremendous contributions to this special issue.

We hope you enjoy this special issue and take some inspiration from it for your own future research.

Qiang Yang
Yongxin Tong
Yang Liu
Yangqiu Song
Hao Peng
Boi Faltings
Guest Editors

Published in ACM Transactions on Intelligent Systems and Technology, Volume 13, Issue 5, October 2022, 424 pages.
ISSN: 2157-6904; EISSN: 2157-6912
DOI: 10.1145/3542930
Editor: Huan Liu
Copyright © 2022 held by the owner/author(s).
Publisher: Association for Computing Machinery, New York, NY, United States
