Abstract
This article introduces a requirement-service mapping method based on large language models that exploits their strong semantic understanding capability. Through multiple rounds of natural language question answering with the user, vague requirements are transformed into structured information and ultimately mapped to specific application services. By combining large language models with traditional vector search techniques, the model is adapted to extract and structure requirement information without retraining or supplying massive amounts of data to build context. The method defines requirement categories and service attributes to constrain and regulate the content of user requirements, giving the large language model rules for expressing unstructured raw requirements as clear, structured information. Once the structured information is obtained, word embeddings are used to vectorize both the service information and the requirements, and vector matching algorithms complete the mapping from requirements to services. Finally, case studies based on an industrial application service template analyze the mapping accuracy under different requirement rules, demonstrating the effectiveness of the requirement-service mapping method proposed in this article.
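To make the final mapping step described above concrete, the sketch below illustrates how a structured requirement (as produced by the multi-round LLM dialogue) and the service attribute descriptions could be vectorized and matched by cosine similarity. This is a minimal illustration, not the authors' implementation: the function and field names (embed_text, SERVICE_CATALOG, the requirement fields) are hypothetical, and embed_text stands in for whatever word/sentence embedding model a real system would call.

```python
# Minimal sketch of the embedding-and-matching step (illustrative only).
# embed_text is a placeholder for a real embedding model; the catalog and
# requirement fields are assumed examples, not the paper's actual data.
import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Toy stand-in for an embedding model: returns a pseudo-random vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Structured requirement as it might emerge from the LLM dialogue (hypothetical fields).
structured_requirement = {
    "category": "equipment monitoring",
    "function": "detect abnormal vibration of machine spindles",
    "constraint": "real-time alerting",
}

# Service attribute descriptions from an application service template (hypothetical entries).
SERVICE_CATALOG = {
    "vibration-analysis-service": "real-time spindle vibration anomaly detection and alerting",
    "energy-report-service": "daily energy consumption reporting for workshop equipment",
}

# Vectorize the requirement and every service description, then pick the best match.
req_vec = embed_text(" ".join(structured_requirement.values()))
scores = {
    name: cosine_similarity(req_vec, embed_text(desc))
    for name, desc in SERVICE_CATALOG.items()
}
best_service = max(scores, key=scores.get)
print(best_service, round(scores[best_service], 3))
```

In a deployed pipeline the placeholder embedding would be replaced by a real model and the exhaustive comparison by an approximate nearest-neighbor index, but the requirement-to-service mapping logic remains the same.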
Data Availability and Access
The collected service data are used in proprietary systems and are not available.
Acknowledgements
This work was supported by the National Key Research and Development Program of China (No. 2022YFB330570) and the Shanghai Science Innovation Action Plan (No. 21511104302).
Author information
Contributions
All the authors contributed equally to this work. All authors read and approved the final manuscript.
Ethics declarations
Competing Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Ethical and Informed Consent for Data Used
This article does not contain studies with human participants or animals. As such, informed consent forms are not applicable to this article.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Ruixiang, L., Qiujun, D., Xianhui, L. et al. Requirement-service mapping technology in the industrial application field based on large language models. Appl Intell 55, 70 (2025). https://doi.org/10.1007/s10489-024-05969-y