Context checklist for industrial software engineering research and practice

https://doi.org/10.1016/j.csi.2021.103541

Highlights

  • Presentation of a systematically developed context checklist.

  • Supports researchers in collecting relevant context information during case studies.

  • The context checklist aids researchers during secondary studies.

  • The checklist was empirically evaluated and refined based on the evaluation.

  • Both researchers and practitioners highlighted benefits of the checklist.

Abstract

The relevance of context is particularly stressed in case studies, where it is said that “case study is an empirical method aimed at investigating contemporary phenomena in their context”. In this research, we classify context information and provide a context checklist for industrial software engineering. The checklist serves two purposes: (a) supporting researchers and practitioners in characterizing the context in which they are working; (b) providing researchers with a checklist to identify relevant contextual information to elicit and report during primary and secondary studies. We used a systematic approach to construct the classification of context information and provide a detailed definition of each item. We collected feedback from both researchers and practitioners. The usefulness of the checklist was perceived more positively by researchers than by practitioners, though practitioners also highlighted benefits (raising awareness of the importance of context and usefulness for management). The understandability was perceived positively by both groups. The checklist may serve as a “meta-model” that forms the basis for adaptations to specific research areas, and as input for researchers deciding which context information to extract in systematic reviews. It may also help researchers report context in research papers.

Introduction

Context frames the studied phenomenon. For example, when conducting an industrial study, the study is carried out in the context of a company, and that context may comprise the company size, the development practices used, and the people’s experience, among others (see, e.g., [1]).

Briand et al. [2] argue that context-driven research is needed in general: which solutions are suitable for practical problems depends on contextual elements. They present practical examples of human, organizational, and domain-related contextual elements.

The documentation of context may serve multiple purposes for software engineering practitioners.

Purpose 1 (industrial): The experience factory [3] proposes to classify projects based on their characteristics and use that to identify relevant project cases to learn from for a newly developed project. Basili et al. [3] state multiple relevant characteristics, such as: application domain, experience (problem, process), number of people in a project, programming language, and product factors such as system size. A context checklist may thus support the recording of experiences in projects to realize the experience factory in industrial settings.

Purpose 2 (industrial): In decision making, past decisions (for example related to architectural decisions) may be used as input to future architectural decisions. Thereby, the context of the decision is highly relevant. For example, whether an organization works with agile or plan-driven approaches impacts how a system is architected; in particular, the design of an up-front architecture is considered too difficult and expensive in an agile context [4]. Hence, context checklists and classifications may help to decide which context information should be considered when making decisions, either when using input from past decisions or experiences and knowledge in general.

Purpose 3 (academic): Context checklists also support the synthesis of literature reviews and industrial studies [1], [5]. They help researchers in deciding what type of contextual information to report in their primary studies (e.g., by narrative descriptions plus tagging) and what to extract in secondary studies.
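To make purpose 1 concrete, the following is a minimal, hypothetical sketch (not from the paper) of how project contexts could be recorded as attribute sets and past projects ranked by context similarity to a new project, in the spirit of the experience factory [3]. All project names, attributes, and values are invented for illustration:

```python
def context_similarity(a: dict, b: dict) -> float:
    """Fraction of context attributes on which two projects agree."""
    keys = set(a) | set(b)
    if not keys:
        return 0.0
    matches = sum(1 for k in keys if a.get(k) == b.get(k))
    return matches / len(keys)

# Hypothetical repository of past project contexts.
past_projects = {
    "P1": {"domain": "automotive", "process": "agile", "size": "large"},
    "P2": {"domain": "web", "process": "plan-driven", "size": "small"},
    "P3": {"domain": "automotive", "process": "plan-driven", "size": "large"},
}

# Context of a newly started project.
new_project = {"domain": "automotive", "process": "agile", "size": "small"}

# Rank past projects by context similarity; the most similar ones are
# the most promising sources of reusable experience.
ranked = sorted(
    past_projects,
    key=lambda p: context_similarity(past_projects[p], new_project),
    reverse=True,
)
# P1 shares two of three attributes with the new project, so it ranks first.
```

In a real experience factory, the attribute names would come from a checklist such as the one proposed in this paper, and a more refined similarity measure (e.g., weighting attributes by relevance) would likely be needed.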

Though many possible context elements have been described, no comprehensive checklist or classification for software engineering exists today, and none of the previously proposed checklists and classifications has been empirically evaluated.

This paper aims to expand and complement existing context classifications such as [1], [6], [7] and our previous work on documenting decisions about the selection of software components [8], [9], to create a comprehensive and structured checklist of relevant context information in software engineering. Our checklist expands on previous checklists by providing detailed descriptions, value domain types, and values for the context information, thus disambiguating high-level checklists such as the one proposed by Petersen and Wohlin [1]. It is vital that practitioners can identify and describe their own context, for example, to achieve purposes 1) and 2) stated above. Furthermore, researchers need to be able to decide what context to report in their studies, and what to look for when extracting context in secondary studies for the synthesis of evidence (purpose 3).

Thus, in our work, we carry out evaluations with the following perspectives:

  • E1: Gather the feedback of practitioners after they classify their context using the checklist. We capture the practitioners’ concerns, which provides an evaluation of the deficiencies of the checklist (e.g., with respect to understandability). Understandability is a prerequisite for practitioners to use the checklist (e.g., for the purposes stated above).

  • E2: Gather the feedback of researchers after they extract the context of published case studies. We assess the consistency of the extractions and collect qualitative feedback, which we use to revise the checklist and improve its understandability.

E1 provides input towards achieving purposes 1) and 2) above, and E2 provides input towards achieving purpose 3).

To create the checklist, we utilized an iterative approach inspired by Bayona-Oré [10] to systematically identify and structure context information. The approach comprises identifying the problem to be solved, identifying and extracting information for the classification, and design and construction, with particular emphasis on controlling and improving the terminology of the classification. We used questionnaires to evaluate the classification with practitioners (E1) and researchers (E2).

The remainder of the paper is structured as follows: Section 2 presents the background and related work. Section 3 describes the process of constructing the initial version of the classification, inspired by Bayona-Oré [10]. Section 4 describes the final version of the classification, incorporating the lessons learned during the evaluation, which is described in Section 5. Section 6 discusses the implications for researchers and practitioners. Section 7 concludes the paper.

Section snippets

Related work

Ghaisas et al. [11] propose an approach for reasoning about how to generalize findings across cases, arguing that the ability to predict a research outcome may depend on the similarity of the cases. A three-step process has been proposed:

  • 1. Describe and characterize the context elements of past cases, and also describe their relationships.

  • 2. Reason about the relation between the context elements and the observations made in the study.

  • 3. Compare the findings of a new case to the previous cases and reason on the

Classification construction process

The construction of the context classification, which is to be used as a checklist, was inspired by the approach proposed by Bayona-Oré [10]. In the following, we describe each step of the construction process. The last step is presented in more detail in Section 5.

  • 1. Planning: Define software engineering knowledge areas based on the Software Engineering Body of Knowledge (SWEBOK) [16]; define the objectives of the classification; define the subject matter; select a

The context checklist

Fig. 1 depicts the result of the process, a context checklist with three hierarchy levels. In the center of the checklist is the object of study or the phenomenon being investigated. At the top level, a narrative description of the context is provided. The narrative description summarizes the main contextual aspects but can also, for example, explain relations between contextual elements that could not be captured easily through the checklist, such as the connection between time pressure and the

Method

We first describe the method for the evaluations from the practitioner and researcher point of view. Thereafter, the results are presented.

Discussion

We first reflect on the usefulness for researchers and practitioners, followed by the checklist’s understandability, which impacts its usability.

Conclusion

The paper presents a context checklist based on classifying software engineering context. The classification comprises the following high-level categories: organization, product, stakeholder, development method and technology, and business and market. Each category comprises a set of context items, each of which has a value domain and concrete values.
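The structure just described (categories containing context items, each with a value domain and concrete values) could be sketched as simple data types. This is an illustrative representation, not the paper’s artifact; the example item, its description, and its values are invented:

```python
from dataclasses import dataclass, field


@dataclass
class ContextItem:
    """One checklist entry: a named item with a value domain and values."""
    name: str
    description: str
    value_domain: str  # e.g., "nominal", "ordinal", "numeric"
    values: list = field(default_factory=list)


@dataclass
class Category:
    """A high-level category grouping related context items."""
    name: str
    items: list = field(default_factory=list)


# The five high-level categories from the classification, with one
# invented example item under "Organization".
checklist = [
    Category("Organization", [
        ContextItem("Company size", "Number of employees", "ordinal",
                    ["small", "medium", "large"]),
    ]),
    Category("Product", []),
    Category("Stakeholder", []),
    Category("Development method and technology", []),
    Category("Business and market", []),
]
```

Such a representation would, for instance, make it straightforward to tag primary studies with context values or to generate extraction forms for secondary studies.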

The initial version of the checklist was constructed following a systematic process comprising the steps planning (defining the objectives of the

CRediT authorship contribution statement

Kai Petersen: Conceptualization, Data curation, Methodology, Writing - original draft, Writing - review & editing. Jan Carlson: Conceptualization, Data curation, Methodology, Writing - original draft, Writing - review & editing. Efi Papatheocharous: Conceptualization, Data curation, Methodology, Writing - original draft, Writing - review & editing. Krzysztof Wnuk: Conceptualization, Data curation, Methodology, Writing - original draft, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (34)

  • E. Papatheocharous et al.

    The GRADE decision canvas for classification and reflection on architecture decisions

    Proceedings of the 12th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE

    (2017)
  • S. Bayona-Oré et al.

    Critical success factors taxonomy for software process deployment

    Software Quality Journal

    (2014)
  • S. Ghaisas et al.

    Generalizing by similarity: Lessons learnt from industrial case studies

    1st International Workshop on Conducting Empirical Studies in Industry, CESI 2013

    (2013)
  • T. Dybå et al.

    What works for whom, where, when, and why?: on the role of context in empirical software engineering

    2012 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM

    (2012)
  • T. Dybå

    Contextualizing empirical evidence

    IEEE Software

    (2013)
  • D. Kirk et al.

    Investigating a conceptual construct for software context

    18th International Conference on Evaluation and Assessment in Software Engineering, EASE ’14, London, England, United Kingdom, May 13–14, 2014

    (2014)
  • B. Cartaxo et al.

    Mechanisms to characterize context of empirical studies in software engineering

    Experimental Software Engineering Latin American Workshop (ESELAW 2015)

    (2015)