Optimizing low-rank adaptation with decomposed matrices and adaptive rank allocation

  • Letter
Frontiers of Computer Science

Conclusion

In this paper, we argued that assigning the same rank to every module in LoRA limits its potential. In response, we proposed two novel PEFT strategies, based on decomposed matrices and adaptive rank allocation, that improve LoRA in both single-task and multi-task scenarios. Extensive experiments demonstrate the effectiveness of the proposed methods.
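For readers unfamiliar with the rank hyperparameter in question, the sketch below shows a standard LoRA wrapper in which each wrapped linear layer can be given its own rank rather than one global setting. It is a minimal illustration of the general idea only; the class name, the wrap_with_lora helper, and the per-module rank dictionary are hypothetical constructs of this summary and do not reproduce the method proposed in the paper.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.scaling = alpha / rank
        # A: (rank, in_features), B: (out_features, rank). B starts at zero so the
        # update is initially zero and training begins from the pretrained model.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


def wrap_with_lora(model: nn.Module, rank_per_module: dict[str, int]) -> nn.Module:
    """Replace the named nn.Linear submodules with LoRA-wrapped versions.

    rank_per_module maps a submodule name to its rank, so different modules can
    receive different ranks instead of one shared value (illustrative only).
    """
    for name, rank in rank_per_module.items():
        parent_name, _, child_name = name.rpartition(".")
        parent = model.get_submodule(parent_name) if parent_name else model
        child = getattr(parent, child_name)
        assert isinstance(child, nn.Linear)
        setattr(parent, child_name, LoRALinear(child, rank))
    return model


if __name__ == "__main__":
    # Toy two-layer MLP; each projection gets a different (hypothetical) rank.
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))
    model = wrap_with_lora(model, {"0": 8, "2": 4})
    print(model(torch.randn(2, 128)).shape)  # torch.Size([2, 128])

Initializing lora_B to zero keeps the wrapped model identical to the pretrained one at the start of fine-tuning, which is the usual LoRA convention; only the small A and B matrices are updated during training.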



Acknowledgements

This research was partially supported by the National Science and Technology Major Project (Grant No. 2023ZD0121103), and the National Natural Science Foundation of China (Grant Nos. 62376086, U23B2031).

Author information

Corresponding author

Correspondence to Kun Zhang.

Ethics declarations

Competing interests The authors declare that they have no competing interests or financial conflicts to disclose.

Electronic supplementary material


About this article


Cite this article

Zhang, D., Yang, F., Zhang, K. et al. Optimizing low-rank adaptation with decomposed matrices and adaptive rank allocation. Front. Comput. Sci. 19, 195337 (2025). https://doi.org/10.1007/s11704-024-40317-w
