Domain-Specific Fine-Tuning of Large Language Models for Interactive Robot Programming

  • Conference paper
  • European Robotics Forum 2024 (ERF 2024)

Part of the book series: Springer Proceedings in Advanced Robotics (SPAR, volume 32)


Abstract

Industrial robots are applied in a widening range of industries, but robot programming mostly remains a task limited to programming experts. We propose a natural-language-based assistant for programming advanced industrial robotic applications and investigate strategies for domain-specific fine-tuning of foundation models with limited data and compute.
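As a concrete illustration of what fine-tuning with limited data and compute can look like, the sketch below applies LoRA-style parameter-efficient adaptation using the Hugging Face transformers and peft libraries. The base model, target modules, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch: LoRA-style parameter-efficient fine-tuning with the
# Hugging Face transformers + peft libraries. Model name, target modules, and
# hyperparameters are illustrative assumptions, not the paper's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed open foundation model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

# LoRA freezes the pretrained weights and trains small low-rank adapter
# matrices instead, which is what keeps data and compute requirements low.
config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically under 1% of weights are trainable
```

With adapters attached, a small domain-specific dataset of natural-language robot-programming instructions could be tokenized and passed to a standard trainer; only the adapter weights are updated, so the run fits on a single GPU.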



Author information

Corresponding author

Correspondence to Benjamin Alt.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Alt, B. et al. (2024). Domain-Specific Fine-Tuning of Large Language Models for Interactive Robot Programming. In: Secchi, C., Marconi, L. (eds) European Robotics Forum 2024. ERF 2024. Springer Proceedings in Advanced Robotics, vol 32. Springer, Cham. https://doi.org/10.1007/978-3-031-76424-0_49
