Abstract
Industrial robots are applied in a widening range of industries, but robot programming mostly remains a task limited to programming experts. We propose a natural language-based assistant for the programming of advanced industrial robotic applications and investigate strategies for domain-specific fine-tuning of foundation models with limited data and compute.
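The abstract mentions fine-tuning foundation models under limited data and compute; a common family of strategies for this is parameter-efficient fine-tuning, e.g. low-rank adaptation (LoRA). The following is a minimal numpy sketch of the low-rank adaptation idea only, not the authors' actual training setup; all names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Low-rank adaptation (LoRA) idea: keep the pretrained weight matrix
# W (d_out x d_in) frozen and learn a low-rank update B @ A of rank r,
# so only r * (d_in + d_out) parameters are trained instead of d_out * d_in.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero init)
alpha = 8.0                             # scaling hyperparameter

def adapted_forward(x):
    # Forward pass with the adapter: W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted model reproduces the base model.
assert np.allclose(adapted_forward(x), W @ x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs full: {full_params}")
```

The zero initialization of `B` is the standard trick that makes the adapted model start out identical to the frozen base model, so fine-tuning departs from the pretrained behavior gradually.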
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Alt, B. et al. (2024). Domain-Specific Fine-Tuning of Large Language Models for Interactive Robot Programming. In: Secchi, C., Marconi, L. (eds) European Robotics Forum 2024. ERF 2024. Springer Proceedings in Advanced Robotics, vol 32. Springer, Cham. https://doi.org/10.1007/978-3-031-76424-0_49
DOI: https://doi.org/10.1007/978-3-031-76424-0_49
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-76423-3
Online ISBN: 978-3-031-76424-0
eBook Packages: Intelligent Technologies and Robotics (R0)