Abstract:
The rise of the Internet of Health Things (IoHT) has resulted in a significant increase in collaborative initiatives among healthcare organizations employing federated learning (FL). Although FL trains models locally to protect privacy, exchanging model parameters still creates privacy risks, especially when working with non-Euclidean data such as graphs. Differential privacy (DP) is widely used to address this issue; however, choosing appropriate privacy parameters remains difficult. Therefore, this research employs Rényi differential privacy (RDP) analysis, which extends traditional DP by providing more flexibility in the selection of privacy parameters. To evaluate this, the research first models a malware dataset as a function call graph (FCG). Subsequently, a DP-SGD-enabled DotGAT model is used to classify both malware and benign applications, preserving privacy while maintaining model utility. Finally, we empirically demonstrate that selecting Rényi divergence order (α) values between 2 and 2.5 optimises the balance between privacy and utility for graph-based models in the FL setup, improving privacy in healthcare collaboration.
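The role of the Rényi order α can be illustrated with the standard accountant arithmetic for the Gaussian mechanism: compose the per-step RDP cost over training steps, then convert to (ε, δ)-DP. The sketch below is illustrative only; it ignores privacy amplification by subsampling (which real DP-SGD accounting would include), and `sigma`, `steps`, and `delta` are assumed values, not taken from the paper.

```python
import math

def gaussian_rdp(alpha, sigma):
    # RDP of the Gaussian mechanism at order alpha
    # (L2 sensitivity 1): alpha / (2 * sigma^2)
    return alpha / (2.0 * sigma ** 2)

def rdp_to_dp(rdp, alpha, delta):
    # Standard conversion from (alpha, rdp)-RDP to (eps, delta)-DP:
    # eps = rdp + log(1/delta) / (alpha - 1)
    return rdp + math.log(1.0 / delta) / (alpha - 1.0)

# Assumed training configuration (not from the paper).
sigma, steps, delta = 5.0, 1000, 1e-5

# RDP composes additively over steps; scan candidate orders
# and report the resulting epsilon for each.
for alpha in [1.5, 2.0, 2.5, 4.0, 8.0]:
    rdp_total = steps * gaussian_rdp(alpha, sigma)
    eps = rdp_to_dp(rdp_total, alpha, delta)
    print(f"alpha={alpha:4}  eps={eps:.2f}")
```

Small orders keep the composed RDP term low but inflate the log(1/δ)/(α−1) conversion term, while large orders do the opposite; the optimal ε is found at an intermediate α, which is the trade-off the paper probes in the 2 to 2.5 range. Production accountants (e.g. the one in Opacus) perform the same minimisation over a grid of orders with subsampled-Gaussian RDP bounds.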
Date of Conference: 07-08 December 2023
Date Added to IEEE Xplore: 20 March 2024
ISBN Information: