DOI: 10.1145/3530019.3535345
Research Article

Classification of Software Engineering Contributions to Support Validation Techniques and Research Applications

Published: 13 June 2022

Abstract

Science, and thereby software engineering (SE) research, enables progress and innovation. SE research has high economic relevance and serves as an enabler for innovations in other research fields. Researchers in this field perform many activities and face corresponding challenges. They have to design and conduct research, deciding on an appropriate method design for a scientific or practical problem. They have to review research, critically assessing research outcomes in a systematic and comparable way. They have to search for research outcomes to find research gaps to build upon; likewise, by finding related work, they can justify the relevance of their own research. These search processes rely on keyword-based queries and suffer from information overload due to the immensely growing number of SE papers published every year. Thus, there is a need for conceptual approaches, automated mechanisms, and a rethinking of scholarly communication to support these research activities in an effective, efficient, and comprehensible way. A variety of classifications for research methods are available in the literature, and empirical standards for research methods are continuously developed and adapted from other research fields. However, there is no consistency in the covered research methods or in the terminology used, and no systematic approach exists to structure this body of research knowledge (e.g., template research questions, research methods, validity threats, replication types). Therefore, this doctoral thesis will provide a unified classification scheme to describe the characteristics of SE research. This scheme further enables the classification of SE papers and a corresponding queryable knowledge management system and thus advances the state of practice.
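The envisioned unified classification scheme and queryable knowledge management system can be pictured as a faceted data model over classified papers. The following Python sketch is a minimal illustration of that idea, assuming facets such as research method, validity threats, and replication type as named in the abstract; all class, facet, and function names here are hypothetical and not the thesis's actual design.

```python
# Minimal sketch of a faceted classification scheme for SE papers and a
# queryable knowledge base built on top of it. All names used here
# (ResearchMethod, PaperClassification, KnowledgeBase) are hypothetical
# illustrations, not the thesis's actual design.
from dataclasses import dataclass, field
from enum import Enum


class ResearchMethod(Enum):
    """One facet of the scheme: the research method used in a paper."""
    CASE_STUDY = "case study"
    CONTROLLED_EXPERIMENT = "controlled experiment"
    SURVEY = "survey"
    BENCHMARK = "benchmark"


@dataclass
class PaperClassification:
    """An SE paper described along a few facets named in the abstract."""
    title: str
    research_method: ResearchMethod
    validity_threats: set[str] = field(default_factory=set)
    replication_type: str | None = None


class KnowledgeBase:
    """In-memory store answering simple conjunctive faceted queries."""

    def __init__(self) -> None:
        self._papers: list[PaperClassification] = []

    def add(self, paper: PaperClassification) -> None:
        self._papers.append(paper)

    def query(self, *, method: ResearchMethod | None = None,
              threat: str | None = None) -> list[PaperClassification]:
        # None for a facet means "match any value"; all given facets
        # must match (conjunctive faceted search).
        return [p for p in self._papers
                if (method is None or p.research_method == method)
                and (threat is None or threat in p.validity_threats)]


kb = KnowledgeBase()
kb.add(PaperClassification(
    title="A hypothetical controlled experiment on code review",
    research_method=ResearchMethod.CONTROLLED_EXPERIMENT,
    validity_threats={"external validity"},
    replication_type="exact"))

# Find all controlled experiments reporting external-validity threats.
for paper in kb.query(method=ResearchMethod.CONTROLLED_EXPERIMENT,
                      threat="external validity"):
    print(paper.title)
```

In a full system the store would be persistent and the classifications would be produced (semi-)automatically, which is where the machine learning and natural language processing author tags below come in.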



Published In

EASE '22: Proceedings of the 26th International Conference on Evaluation and Assessment in Software Engineering
June 2022
466 pages
ISBN: 978-1-4503-9613-4
DOI: 10.1145/3530019

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. empirical software engineering research
  2. knowledge management system
  3. machine learning
  4. meta-research in software engineering
  5. natural language processing
