
Parameter-Efficient Fine-Tuning Large Speech Model Based on LoRA



Abstract:

Large language models have achieved impressive performance in natural language processing. Inspired by this advancement, large speech models have also achieved robust speech-processing capabilities through large-scale weak supervision or self-supervision. With this progress, however, it has become difficult for researchers to deploy and fine-tune these models, which have millions or even billions of parameters. Moreover, we find that the performance of large speech models on specific downstream tasks still has room for improvement. We therefore study the use of low-rank adaptation (LoRA) to fine-tune a large speech model at lower cost and with a small labelled dataset. We successfully reduce the VRAM required to deploy and train the model on a consumer-level device. At the same time, we demonstrate the potential of large speech models for specific downstream tasks and achieve better performance than full-parameter fine-tuning.
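
To make the approach concrete, the following is a minimal sketch of LoRA fine-tuning applied to a large speech model. It assumes the Hugging Face transformers and peft libraries and uses a Whisper checkpoint as the large speech model; the specific checkpoint, target modules, and hyperparameters are illustrative assumptions, not details confirmed by the paper.

```python
# Minimal sketch: LoRA fine-tuning of a large speech model.
# Assumptions (not stated in the abstract): the base model is OpenAI Whisper
# loaded via Hugging Face `transformers`, and LoRA adapters are attached with
# the `peft` library to the attention projection matrices.
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load the pretrained large speech model (checkpoint choice is hypothetical).
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

# Configure the low-rank adapters. Only the small rank-r update matrices are
# trained; the base weights stay frozen, so no optimizer state is kept for
# them, which is what lowers the VRAM cost on a consumer-level device.
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update (illustrative)
    lora_alpha=32,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically around 1% of parameters are trainable
```

The wrapped model can then be trained with an ordinary fine-tuning loop (or a trainer class) on a small labelled dataset, with gradients flowing only through the adapter weights.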
Date of Conference: 08-10 May 2024
Date Added to IEEE Xplore: 10 July 2024

Conference Location: Tianjin, China
