Abstract:
Differential privacy (DP) has proven to be an effective and universal solution for privacy protection in language models. However, introducing DP incurs significant computational overhead. One promising way to address this challenge is to integrate Parameter-Efficient Fine-Tuning (PEFT) with DP, leveraging the memory-efficient characteristics of PEFT to reduce DP's substantial memory consumption. Since fine-tuning aims to quickly adapt pretrained models to downstream tasks, it is crucial to balance privacy protection with model utility and avoid excessive performance loss. In this paper, we propose the Relevance-based Adaptive Private Fine-Tuning (Rap-FT) framework, the first approach designed to mitigate the model utility loss caused by DP perturbations in the PEFT setting, and to achieve a balance between differential privacy and model utility. Specifically, we introduce an enhanced layer-wise relevance propagation process to analyze the relevance of trainable parameters, which can be adapted to the three major categories of PEFT methods. Based on the generated relevance map, we partition the parameter space along its dimensions and develop an adaptive gradient perturbation strategy that adjusts the added noise to mitigate the adverse impact of perturbation. Extensive experimental evaluations demonstrate that our Rap-FT framework improves the utility of the fine-tuned model compared to baseline differentially private fine-tuning methods, while maintaining a comparable level of privacy protection.
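To make the core idea concrete, the following is a minimal, hypothetical sketch of relevance-modulated gradient perturbation in the style of DP-SGD: the gradient is clipped to bound sensitivity, and Gaussian noise is then scaled per dimension so that more relevant parameters (per a precomputed relevance map) receive less noise. The specific allocation rule `1 - 0.5 * r` and the function name are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def adaptive_gaussian_perturb(grad, relevance, clip_norm=1.0, sigma=1.0, rng=None):
    """Illustrative sketch of relevance-based adaptive perturbation.

    grad      : flat gradient vector of the trainable (PEFT) parameters.
    relevance : per-dimension relevance scores (e.g., from layer-wise
                relevance propagation); higher means more important.
    clip_norm : L2 clipping bound (DP-SGD sensitivity bound).
    sigma     : base noise multiplier.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Standard DP-SGD-style clipping to bound per-example sensitivity.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))

    # Normalize relevance to [0, 1]; higher relevance -> smaller noise
    # multiplier. The halving factor is an assumed allocation rule chosen
    # purely for illustration.
    span = relevance.max() - relevance.min()
    r = (relevance - relevance.min()) / (span + 1e-12)
    noise_scale = sigma * clip_norm * (1.0 - 0.5 * r)

    noise = rng.normal(0.0, 1.0, size=grad.shape) * noise_scale
    return clipped + noise
```

In a real differentially private pipeline, the per-dimension scaling would have to be accounted for in the privacy analysis (e.g., by bounding the minimum noise multiplier), which is part of what a framework like Rap-FT must handle.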
Published in: IEEE Transactions on Information Forensics and Security ( Volume: 20)