DOI: 10.1145/3567512.3567534

Neural Language Models and Few Shot Learning for Systematic Requirements Processing in MDSE

Published: 01 December 2022

Abstract

Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing numbers of requirements that arise during the development process. The language in which requirements are written is mostly informal and highly individual. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads to the challenge of translating masses of requirements and the necessity of training for requirements engineers. In this paper, we derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models for the automated translation of informal requirements to structured languages such as a requirement DSL.
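The few-shot approach the abstract describes can be sketched as prompt construction: a handful of (informal requirement, DSL translation) pairs are concatenated into a prompt, and a large pretrained language model is asked to complete the translation for a new, untranslated requirement. The DSL syntax and example requirements below are purely hypothetical illustrations, not the paper's actual requirement notation.

```python
# Minimal sketch of few-shot prompting for requirement translation.
# The "when ... then ..." DSL used here is an assumed, illustrative syntax.

def build_few_shot_prompt(examples, new_requirement):
    """Concatenate translated example pairs, then the untranslated input.

    The model's completion after the final "DSL:" marker is taken as the
    candidate translation of `new_requirement`.
    """
    parts = []
    for informal, formal in examples:
        parts.append(f"Requirement: {informal}\nDSL: {formal}\n")
    parts.append(f"Requirement: {new_requirement}\nDSL:")
    return "\n".join(parts)

# Hypothetical automotive examples in the style of the paper's domain.
examples = [
    ("The vehicle must brake when an obstacle is closer than 2 m.",
     "when distance(obstacle) < 2m then activate(braking)"),
    ("The headlights shall turn on if ambient light falls below 30 lux.",
     "when ambient_light < 30lux then activate(headlights)"),
]

prompt = build_few_shot_prompt(
    examples, "The horn must sound while the pedestrian warning is active.")
print(prompt)
```

In the setting the abstract describes, this prompt would be passed to an autoregressive language model (the paper's evaluation builds on models such as GPT-J); no model parameters are updated, which is what makes the approach attractive for retrofitting formal notations into existing projects with masses of legacy requirements.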


Cited By

  • (2025) "On the use of large language models in model-driven engineering." Software and Systems Modeling. DOI: 10.1007/s10270-025-01263-8. Online publication date: 31-Jan-2025.
  • (2024) "Exploring Dependencies Among Inconsistencies to Enhance the Consistency Maintenance of Models." 2024 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 147–158. DOI: 10.1109/SANER60148.2024.00023. Online publication date: 12-Mar-2024.
  • (2024) "Engineering Safety Requirements for Autonomous Driving with Large Language Models." 2024 IEEE 32nd International Requirements Engineering Conference (RE), 218–228. DOI: 10.1109/RE59067.2024.00029. Online publication date: 24-Jun-2024.
  • (2024) "Siamese Neural Networks Method for Semantic Requirements Similarity Detection." IEEE Access, 12, 140932–140947. DOI: 10.1109/ACCESS.2024.3469636. Online publication date: 2024.
  • (2023) "Leveraging Natural Language Processing for a Consistency Checking Toolchain of Automotive Requirements." 2023 IEEE 31st International Requirements Engineering Conference (RE), 212–222. DOI: 10.1109/RE57278.2023.00029. Online publication date: Sep-2023.

Published In

SLE 2022: Proceedings of the 15th ACM SIGPLAN International Conference on Software Language Engineering
November 2022
278 pages
ISBN:9781450399197
DOI:10.1145/3567512

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. few-shot learning
  2. model-driven engineering
  3. model-driven requirements engineering
  4. natural language processing

Qualifiers

  • Research-article

Funding Sources

  • German Federal Ministry for Economic Affairs and Climate Action

Conference

SLE '22

Article Metrics

  • Downloads (last 12 months): 58
  • Downloads (last 6 weeks): 5
Reflects downloads up to 01 Mar 2025
