Abstract:
The evolution from fifth-generation mobile communications (5G) to beyond 5G (B5G) will lead to more ubiquitous and smarter paradigms in the Internet of Things (IoT). Communication will shift from the classical information-theoretic paradigm to a semantic communication (SemCom) paradigm driven by artificial intelligence (AI) to enhance capacity and optimize resources. Image SemCom (ISC) will empower IoT applications, such as drone image acquisition. However, ISC requires suitable devices and sufficient computing resources to support complex neural network models, posing a significant challenge. To address this, we propose a federated semantic feature distillation (FedSFD) architecture that improves the global performance of ISC by combining federated learning (FL) and feature distillation (FD) for feature knowledge transfer. First, lightweight IoT device models in the edge group and the powerful server model alternately minimize losses to update their parameters. After learning the middle-layer features of all the edge models, the server can guide the individual device models. Second, we incorporate the information bottleneck (IB) concept into the design of the loss functions to balance compression and reconstruction. Third, focusing on the tradeoff between local training and knowledge interaction, FedSFD achieves image semantic reconstruction without sharing private data, ensuring personalization and privacy protection within the FL framework. Finally, simulation experiments show that the proposed approach achieves better ISC reconstruction and stronger noise robustness than the baseline during group ISC.
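To make the abstract's loss design concrete, the following is a minimal sketch (not the authors' released code) of one local update on an edge device: a lightweight encoder-decoder is trained with a reconstruction term, a simple IB-style compression surrogate on the latent feature, and a feature-distillation term that pulls its middle-layer feature toward server-provided guidance. The model shapes, the layer structure, the weighting factors beta and lambda_fd, and the way z_server is obtained are all illustrative assumptions, not details from the paper.

```python
# Minimal FedSFD-style sketch (assumed, not the paper's implementation):
# edge model training with reconstruction + IB-style rate penalty + feature distillation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeEncoderDecoder(nn.Module):
    """Lightweight edge model: image -> latent semantic feature -> reconstructed image."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, latent_dim, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)       # middle-layer semantic feature
        x_hat = self.decoder(z)   # reconstructed image
        return z, x_hat

def ib_distillation_loss(x, x_hat, z_edge, z_server, beta=1e-3, lambda_fd=0.5):
    """Assumed loss: reconstruction + IB-style compression penalty + feature distillation.

    - reconstruction: MSE between the input image and its reconstruction
    - compression:    L2 penalty on the latent as a crude rate surrogate
    - distillation:   MSE between edge and server middle-layer features
    beta and lambda_fd are placeholder hyperparameters, not values from the paper.
    """
    rec = F.mse_loss(x_hat, x)
    rate = z_edge.pow(2).mean()
    fd = F.mse_loss(z_edge, z_server.detach())
    return rec + beta * rate + lambda_fd * fd

# Toy usage: one local update, with a stand-in server model run locally.
# In FedSFD, feature guidance (not raw private images) would be exchanged;
# the actual aggregation protocol is not reproduced here.
edge = EdgeEncoderDecoder()
server_stub = EdgeEncoderDecoder(latent_dim=64)
opt = torch.optim.Adam(edge.parameters(), lr=1e-3)

x = torch.rand(4, 3, 32, 32)                 # private local images stay on-device
with torch.no_grad():
    z_server, _ = server_stub(x)             # stand-in for server feature guidance
z_edge, x_hat = edge(x)
loss = ib_distillation_loss(x, x_hat, z_edge, z_server)
opt.zero_grad()
loss.backward()
opt.step()
```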
Published in: IEEE Internet of Things Journal (Volume: 11, Issue: 21, 01 November 2024)