Abstract
Existing Automated Service Composition (ASC) approaches typically require their inputs in a designated form, namely tuples, which diverge significantly from the plain, natural-language formats most commonly used to express software requirements. In our previous work, we developed a rule-based approach that required substantial effort to analyze the content of requirements and to establish appropriate rules. Given the recent successes of large language models (LLMs) in automatic text generation and understanding tasks, we propose leveraging LLMs for ASC to extract the critical tuple-based information. We have created a new dataset that simulates everyday service demands, established clear guidelines for service demand types (e.g., inputs and outputs), and implemented a workflow that optimizes LLM performance. Our experiments and results demonstrate that the proposed LLM-based approach not only achieves excellent performance and reliability at a lower cost but also outperforms the complex rule-based solution employed previously.
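To make the extraction step concrete, the following is a minimal, hypothetical sketch rather than the paper's actual workflow or prompts: it assumes an OpenAI-style chat-completions API and asks the model to return the input/output demand tuple as JSON. The model name, prompt wording, and JSON schema here are illustrative assumptions.

```python
import json
from openai import OpenAI  # assumed OpenAI-style client; the paper's actual model and setup may differ

# Hypothetical instruction asking the model to extract a tuple-based service
# demand (the data items the requested service consumes and produces).
SYSTEM_PROMPT = (
    "You extract service demands from natural-language software requirements. "
    "Return JSON with two lists, 'inputs' and 'outputs', naming the data items "
    "the requested service consumes and produces."
)

def extract_demand_tuple(requirement: str, model: str = "gpt-4o-mini") -> dict:
    """Ask the LLM for the input/output demand tuple and parse its JSON reply."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic extraction rather than free-form generation
        response_format={"type": "json_object"},  # constrain the reply to JSON
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": requirement},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    req = "Given a city name and a date, the service should return the weather forecast."
    print(extract_demand_tuple(req))
    # Expected shape: {"inputs": ["city name", "date"], "outputs": ["weather forecast"]}
```

The extracted tuple could then be fed to a downstream ASC planner in place of hand-written rules; the exact prompt design and demand-type guidelines used in the paper are described in its workflow section.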
Acknowledgments
This work was partially supported by the National Science and Technology Council, Taiwan, under Grant No. NSTC112-2221-E-001-008. We thank Ming-To Chuang, an intern research assistant in the Department of Electrical Engineering at National Taiwan University, for his valuable assistance with our dataset construction.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Hsu, CJ., Luo, YX., Mao, YC., Wang, CM., Syu, Y. (2025). Extracting Tuple-Based Service Demands with Large Language Models for Automated Service Composition. In: He, S., Zhang, LJ. (eds) Services Computing – SCC 2024. SCF 2024 - SCC 2024. Lecture Notes in Computer Science, vol 15430. Springer, Cham. https://doi.org/10.1007/978-3-031-77000-5_1
DOI: https://doi.org/10.1007/978-3-031-77000-5_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-76999-3
Online ISBN: 978-3-031-77000-5