DOI: 10.1145/3307630.3342396

White-Box and Black-Box Test Quality Metrics for Configurable Simulation Models

Published: 09 September 2019

Abstract

Simulation models are widely employed to model and simulate complex systems from different domains, such as the automotive domain. These systems are becoming highly configurable to support diverse user demands. Testing every configuration is impracticable, and thus cost-effective testing techniques are mandatory. Costs are usually attributed either to the time it takes to test a configurable system or to its monetary cost, whereas test effectiveness is typically assessed through quality metrics, several of which can be found in the literature. This paper proposes both black-box and white-box test quality metrics for configurable simulation models that rely on 150% variability modeling approaches.
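As a rough illustration of the two metric families the abstract names (not the paper's actual metrics), a black-box metric can score a test suite purely from the simulation outputs it produces, while a white-box metric can score it against the internal structure of the 150% model. The sketch below, with entirely hypothetical names, computes an output-diversity score (average pairwise distance between output signals) and a block-coverage score (fraction of the 150% model's blocks exercised):

```python
# Illustrative sketch only: a black-box diversity metric over simulation
# output signals and a white-box coverage metric over the blocks of a
# 150% model. Function and block names are hypothetical.
import math


def signal_distance(a, b):
    """Euclidean distance between two equally sampled output signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def black_box_diversity(signals):
    """Average pairwise distance: higher means more diverse test outputs."""
    n = len(signals)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if not pairs:
        return 0.0
    return sum(signal_distance(signals[i], signals[j]) for i, j in pairs) / len(pairs)


def white_box_coverage(executed_blocks, model_blocks):
    """Fraction of the 150% model's blocks exercised by the test suite."""
    return len(set(executed_blocks) & set(model_blocks)) / len(model_blocks)


# Three test cases' output signals: two identical, one distinct.
suite_outputs = [[0.0, 1.0, 2.0], [0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]
print(black_box_diversity(suite_outputs))
print(white_box_coverage({"B1", "B2"}, {"B1", "B2", "B3", "B4"}))
```

The black-box score needs only the observable outputs, whereas the white-box score requires instrumenting the model to record which blocks each configuration activates and executes.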


Cited By

  • (2021) Dynamic test prioritization of product lines: An application on configurable simulation models. Software Quality Journal. DOI: 10.1007/s11219-021-09571-0. Online publication date: 20 October 2021.

Published In

SPLC '19: Proceedings of the 23rd International Systems and Software Product Line Conference - Volume B
September 2019
252 pages
ISBN:9781450366687
DOI:10.1145/3307630

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. product lines
  2. simulation models
  3. test quality metrics

Qualifiers

  • Short-paper

Conference

SPLC 2019

Acceptance Rates

Overall Acceptance Rate: 167 of 463 submissions (36%)
