Context checklist for industrial software engineering research and practice
Introduction
Context frames the studied phenomenon. For example, an industrial study is carried out in the context of a company, and that context may comprise the company size, the development practices used, and the people’s experience, among other factors (see, e.g., [1]).
Briand et al. [2] argue that context-driven research is needed in general: which solutions are suitable for practical problems depends on contextual elements. Their work presents practical examples of human, organizational, and domain-related contextual elements.
The documentation of context may serve multiple purposes for software engineering practitioners.
Purpose 1 (industrial): The experience factory [3] proposes to classify projects based on their characteristics and to use those characteristics to identify relevant past projects to learn from when starting a new project. Basili et al. [3] list several relevant characteristics, such as application domain, experience (problem, process), number of people in a project, programming language, and product factors such as system size. A context checklist may thus support the recording of project experiences needed to realize the experience factory in industrial settings.
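As a minimal illustration of this idea, past projects can be tagged with such characteristics and retrieved by similarity to a new project. The sketch below uses hypothetical characteristic names and values, not content taken from the experience factory literature:

```python
# Sketch: retrieve past projects whose recorded characteristics best match
# a new project, in the spirit of the experience factory. All project names
# and characteristic values below are illustrative assumptions.

def similarity(ctx_a: dict, ctx_b: dict) -> float:
    """Fraction of shared characteristics with identical values."""
    keys = set(ctx_a) & set(ctx_b)
    if not keys:
        return 0.0
    return sum(ctx_a[k] == ctx_b[k] for k in keys) / len(keys)

past_projects = {
    "P1": {"domain": "automotive", "language": "C", "team_size": "10-20"},
    "P2": {"domain": "web", "language": "Java", "team_size": "1-9"},
    "P3": {"domain": "automotive", "language": "C++", "team_size": "10-20"},
}

new_project = {"domain": "automotive", "language": "C", "team_size": "10-20"}

# Rank past projects by similarity to the new project's context.
ranked = sorted(past_projects,
                key=lambda p: similarity(past_projects[p], new_project),
                reverse=True)
print(ranked[0])  # the most similar past project to learn from
```

A richer implementation would weight characteristics by relevance or use ordinal distances instead of strict equality; the point here is only that a shared checklist makes such retrieval possible at all.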
Purpose 2 (industrial): In decision making, past decisions (for example, architectural decisions) may be used as input to future ones. Here, the context of the decision is highly relevant. For example, whether an organization works with agile or plan-driven approaches impacts how a system is architected; in particular, the design of an up-front architecture is considered too difficult and expensive in an agile context [4]. Hence, context checklists and classifications may help to decide which context information should be considered when making decisions, whether using input from past decisions or from experience and knowledge in general.
Purpose 3 (academic): Context checklists also support the synthesis of literature reviews and industrial studies [1], [5]. They help researchers in deciding what type of contextual information to report in their primary studies (e.g., by narrative descriptions plus tagging) and what to extract in secondary studies.
Though many possible context elements may be described, no comprehensive checklist or classification for software engineering exists today, and none of the previously proposed checklists and classifications has been empirically evaluated.
This paper aims to expand and complement existing context classifications such as [1], [6], [7] and our previous work on documenting decisions about the selection of software components [8], [9], creating a comprehensive and structured checklist of relevant context information in software engineering. Our checklist extends previous ones by providing detailed descriptions, value domain types, and values for the context information, thus disambiguating the meaning of high-level checklists such as the one proposed by Petersen and Wohlin [1]. It is vital that practitioners can identify and describe their own context, for example, to achieve purposes 1 and 2 stated above. Furthermore, researchers need to be able to decide what context to report in their studies and what to look for when extracting context in secondary studies for the synthesis of evidence (purpose 3).
Thus, in our work, we carry out evaluations with the following perspectives:
- E1: Gather the feedback of practitioners after they classify their context using the checklist. We capture the practitioners’ concerns, which provides an evaluation of the checklist’s deficiencies (e.g., with respect to understandability). Understandability is a prerequisite for practitioners to use the checklist (e.g., for the purposes stated above).
- E2: Gather the feedback of researchers from extracting the context of published case studies. We capture the consistency of the checklist and qualitative feedback, which we use to revise the checklist and improve its understandability.
E1 provides input towards achieving purposes 1 and 2 above, and E2 provides input towards achieving purpose 3.
To create the checklist, we used an iterative approach inspired by Bayona-Oré [10] to systematically identify and structure context information. The approach highlights identifying the problem to be solved, identifying and extracting information for the classification, and design and construction. Particular emphasis is put on controlling and improving the terminology of the classification. We used questionnaires to evaluate the classification with practitioners (E1) and researchers (E2).
The remainder of the paper is structured as follows: Section 2 presents the background and related work. Section 3 describes the process of constructing the initial version of the classification, inspired by Bayona-Oré [10]. Section 4 describes the final version of the classification based on the construction process proposed by Bayona-Oré [10], also incorporating the lessons learned during the evaluation, which are described in Section 5. Section 6 discusses the implications for researchers and practitioners. Section 7 concludes the paper.
Section snippets
Related work
Ghaisas et al. [11] propose an approach for reasoning about how to generalize findings across cases, arguing that the ability to predict a research outcome may depend on the similarity of the cases. A three-step process has been proposed:
1. Describe and characterize the context elements of past cases and also describe their relationship.
2. Reason on the relation between the context elements and the observations made in the study.
3. Compare the findings of a new case to the previous cases and reason on the
Classification construction process
The construction of the context classification, which is to be used as a checklist, was inspired by the approach proposed by Bayona-Oré [10]. In the following, we describe each step of the construction. The last step is presented in more detail in Section 5.
1. Planning: Define software engineering knowledge areas based on the Software Engineering Body of Knowledge (SWEBOK) [16]; define the objectives of the classification; define the subject matter; select a
The context checklist
Fig. 1 depicts the result of the process: a context checklist with three hierarchy levels. At the center of the checklist is the object of study, or the phenomenon being investigated. At the top level, a narrative description of the context is provided. The narrative description summarizes the main contextual aspects but can also, for example, explain relations between contextual elements that could not be captured easily through the checklist, such as the connection between time pressure and the
Method
We first describe the method for the evaluations from the practitioner and researcher point of view. Thereafter, the results are presented.
Discussion
We first reflect on the usefulness for researchers and practitioners, followed by the checklist’s understandability, which impacts its usability.
Conclusion
The paper presents a context checklist based on a classification of software engineering context. The classification comprises the following high-level categories: organization, product, stakeholder, development method and technology, and business and market. Each category comprises a set of context items, which have a value domain and concrete values.
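The category/item/value structure summarized above can be represented directly as a data structure. The following sketch illustrates the three levels (category, context item, value domain with concrete values); the specific item names and values are illustrative assumptions, not the published checklist content:

```python
from dataclasses import dataclass, field

# Sketch of the checklist hierarchy: categories contain context items, and
# each item carries a value domain type with concrete values. The item
# names and value sets below are illustrative, not the actual checklist.

@dataclass
class ContextItem:
    name: str
    value_domain: str                # e.g., "ordinal", "nominal", "free text"
    values: list = field(default_factory=list)

@dataclass
class Category:
    name: str
    items: list = field(default_factory=list)

checklist = [
    Category("organization",
             [ContextItem("company size", "ordinal", ["small", "medium", "large"])]),
    Category("product",
             [ContextItem("system size", "ordinal", ["small", "medium", "large"])]),
    Category("stakeholder",
             [ContextItem("experience", "ordinal", ["low", "medium", "high"])]),
    Category("development method and technology",
             [ContextItem("process model", "nominal", ["agile", "plan-driven", "hybrid"])]),
    Category("business and market",
             [ContextItem("market type", "nominal", ["B2B", "B2C"])]),
]

print([c.name for c in checklist])
```

Encoding the checklist this way makes the value domains machine-checkable, which supports uses such as tagging studies consistently or comparing project contexts.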
The initial version of the checklist was constructed following a systematic process comprising the steps planning (defining the objectives of the
CRediT authorship contribution statement
Kai Petersen: Conceptualization, Data curation, Methodology, Writing - original draft, Writing - review & editing. Jan Carlson: Conceptualization, Data curation, Methodology, Writing - original draft, Writing - review & editing. Efi Papatheocharous: Conceptualization, Data curation, Methodology, Writing - original draft, Writing - review & editing. Krzysztof Wnuk: Conceptualization, Data curation, Methodology, Writing - original draft, Writing - review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (34)
- et al., The situational factors that affect the software development process: towards a comprehensive reference framework, Inf. Softw. Technol. (2012)
- et al., The GRADE taxonomy for supporting decision-making of asset selection in software-intensive system development, Inf. Softw. Technol. (2018)
- Cognitive skill acquisition, Annu. Rev. Psychol. (1996)
- et al., Evaluating strategies for study selection in systematic literature studies, 2014 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM ’14, Torino, Italy, September 18–19, 2014 (2014)
- et al., Context in industrial software engineering research, Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement (2009)
- et al., The case for context-driven software engineering research: generalizability is overrated, IEEE Software (2017)
- et al., Experience factory, Encyclopedia of Software Engineering (1994)
- et al., Software architecture-centric methods and agile development, IEEE Software (2006)
- Understanding and validity in qualitative research, Harv. Educ. Rev. (1992)
- et al., A context model for architectural decision support, 2016 1st International Workshop on Decision Making in Software ARCHitecture (MARCH) (2016)
- The GRADE decision canvas for classification and reflection on architecture decisions, Proceedings of the 12th International Conference on Evaluation of Novel Approaches to Software Engineering, Volume 1: ENASE
- Critical success factors taxonomy for software process deployment, Software Quality Journal
- Generalizing by similarity: Lessons learnt from industrial case studies, 1st International Workshop on Conducting Empirical Studies in Industry, CESI 2013
- What works for whom, where, when, and why?: On the role of context in empirical software engineering, 2012 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM
- Contextualizing empirical evidence, IEEE Software
- Investigating a conceptual construct for software context, 18th International Conference on Evaluation and Assessment in Software Engineering, EASE ’14, London, England, United Kingdom, May 13–14, 2014
- Mechanisms to characterize context of empirical studies in software engineering, Experimental Software Engineering Latin American Workshop (ESELAW 2015)
Cited by (4)
- Checklists to support decision-making in regression testing, Journal of Systems and Software (2023)
- Exploring Task Equivalence for Software Engineering Practice Adaptation and Replacement, Onward! 2022 - Proceedings of the 2022 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, co-located with SPLASH 2022 (2022)
- Contextual Factors Affecting Software Development Practice Efficacy: A Practitioners’ Perspective, International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE - Proceedings (2022)