Conclusion
In this paper, we argued that assigning the same rank to every LoRA module limits its potential. In response, we proposed two novel PEFT strategies that improve the capability of LoRA in single-task and multi-task scenarios, and we conducted extensive experiments demonstrating the effectiveness of the proposed methods.
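To make the premise concrete, the sketch below shows the standard LoRA update W' = W + (alpha/r) · BA and how the rank r can vary per layer rather than being fixed globally. This is a minimal illustration under assumed conventions (the function name `lora_update` and the example rank budget are hypothetical, not the paper's implementation):

```python
import numpy as np

def lora_update(W, r, alpha=16, rng=None):
    """Return W plus a rank-r LoRA-style delta: W + (alpha/r) * B @ A.

    A (r x d_in) is randomly initialized; B (d_out x r) starts at zero,
    so the adapted weight initially equals the pretrained weight.
    In real fine-tuning, only A and B would be trained.
    """
    rng = np.random.default_rng(rng)
    d_out, d_in = W.shape
    A = rng.standard_normal((r, d_in)) * 0.01  # down-projection factor
    B = np.zeros((d_out, r))                   # zero init: no change at start
    return W + (alpha / r) * (B @ A), (A, B)

# A per-layer rank budget (hypothetical values): allowing ranks to differ
# across modules is the flexibility a uniform-rank setting gives up.
ranks = {"attn.q": 8, "attn.v": 8, "ffn.up": 2}
```

With r much smaller than min(d_out, d_in), the adapter trains only r·(d_in + d_out) parameters per weight matrix instead of d_out·d_in, which is what makes varying r a meaningful budget-allocation knob.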
Acknowledgements
This research was partially supported by the National Science and Technology Major Project (Grant No. 2023ZD0121103), and the National Natural Science Foundation of China (Grant Nos. 62376086, U23B2031).
Ethics declarations
Competing interests The authors declare that they have no competing interests or financial conflicts to disclose.
Cite this article
Zhang, D., Yang, F., Zhang, K. et al. Optimizing low-rank adaptation with decomposed matrices and adaptive rank allocation. Front. Comput. Sci. 19, 195337 (2025). https://doi.org/10.1007/s11704-024-40317-w