
Audio-LLM: Activating the Capabilities of Large Language Models to Comprehend Audio Data

  • Conference paper
  • In: Advances in Neural Networks – ISNN 2024 (ISNN 2024)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14827)

Abstract

We introduce Audio-LLM (code: https://github.com/orallove/audio-LLM), a large language model that improves audio question-answering (AQA) systems and activates the capabilities of large language models to comprehend audio data. We propose an encoding method that transforms audio data into embedded representations, enabling the LLM to comprehend and process the information contained in the audio. Through a series of fine-tuning stages, we align audio and text, allowing the LLM to leverage both auditory and textual prompts. This alignment lets the model achieve strong performance on automatic speech recognition (ASR), emotion recognition (ER), English-to-Chinese translation (En2Zh), music captioning (MC), and related tasks, demonstrating its versatility across downstream applications. The model is also efficient to train: only about 20 million parameters are updated, roughly 0.27% of the full Audio-LLM model. Finally, the discussion highlights the model's adaptability to zero-shot tasks, positioning Audio-LLM as a meaningful step toward generalized hearing AI.
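The abstract's recipe (audio mapped into the LLM's embedding space, with only ~20M parameters updated) matches the frozen-encoder-plus-trainable-connector pattern common in recent audio-language work. The sketch below illustrates that pattern, not the authors' released implementation: the Q-Former-style query pooling, the dimensions (1024-d audio features, 4096-d LLM embeddings, 32 audio tokens), and all names are illustrative assumptions.

# Minimal PyTorch sketch of a frozen-encoder + trainable-connector
# setup, as described in the abstract. Module names, dimensions, and
# the query-pooling design are assumptions, not the paper's code.
import torch
import torch.nn as nn

class AudioToLLMConnector(nn.Module):
    """Maps frozen audio-encoder features to the LLM embedding space."""

    def __init__(self, audio_dim=1024, llm_dim=4096, num_query_tokens=32):
        super().__init__()
        # Learnable queries compress a variable-length audio sequence
        # into a fixed number of "audio tokens" via cross-attention
        # (a Q-Former-style choice; assumed here, not confirmed).
        self.queries = nn.Parameter(torch.randn(num_query_tokens, audio_dim))
        self.cross_attn = nn.MultiheadAttention(audio_dim, num_heads=8,
                                                batch_first=True)
        self.proj = nn.Linear(audio_dim, llm_dim)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, audio_dim) from a frozen encoder.
        q = self.queries.unsqueeze(0).expand(audio_feats.size(0), -1, -1)
        pooled, _ = self.cross_attn(q, audio_feats, audio_feats)
        # Output (batch, num_query_tokens, llm_dim) is prepended to the
        # text prompt's token embeddings before the frozen LLM runs.
        return self.proj(pooled)

connector = AudioToLLMConnector()
audio_tokens = connector(torch.randn(2, 500, 1024))  # two 500-frame clips
print(audio_tokens.shape)  # torch.Size([2, 32, 4096])

trainable = sum(p.numel() for p in connector.parameters())
print(f"trainable params: {trainable / 1e6:.1f}M")
# The abstract's own numbers also imply the backbone size:
# 20e6 / 0.0027 is roughly 7.4e9, i.e. a ~7B-parameter frozen LLM.

Pooling to a fixed number of query tokens bounds the LLM's prompt length regardless of clip duration, and freezing both the audio encoder and the LLM backbone is what keeps the trainable fraction near the 0.27% reported above.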

Acknowledgements

This work received support from the Huawei Intelligent Foundation and the National Natural Science Foundation of China under Grant U2341228.

Author information

Corresponding author

Correspondence to Dongting Li.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Li, D., Tang, C., Liu, H. (2024). Audio-LLM: Activating the Capabilities of Large Language Models to Comprehend Audio Data. In: Le, X., Zhang, Z. (eds) Advances in Neural Networks – ISNN 2024. ISNN 2024. Lecture Notes in Computer Science, vol 14827. Springer, Singapore. https://doi.org/10.1007/978-981-97-4399-5_13

  • DOI: https://doi.org/10.1007/978-981-97-4399-5_13

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-4398-8

  • Online ISBN: 978-981-97-4399-5

  • eBook Packages: Computer Science, Computer Science (R0)
