1 Introduction

In response to an increasingly dynamic world (new and uncertain contexts, complexity, competition, updatability, and user feedback), software is developed like an evolutionary system, e.g. in a DevOps approach (cf. [1] or Netflix). During the diffusion process of software innovations, users start to change and evolve the initial idea of the software and re-invent it [2]. This makes it necessary to frequently observe the users for changed behavior and requirements. Furthermore, software increasingly targets experts and highly skilled users who shall be supported in their work [3]. They possess specialized and experience-based knowledge that is indispensable for successful software development, but this knowledge is usually only activated when they are in their specific work context (cf. [4]). Hence, the question arises how we can become aware of changes, new ideas, and challenges in such contexts, where we must update our understanding continuously.

We tackle this question in one of our projects, called ZenMEM, where we are developing software tools that support musicologists in their work. In an initial project phase, it became clear that besides phase-oriented requirements elicitation and evaluation, we needed means of getting continuous feedback on our results as well as on the vision. Unfortunately, traditional approaches to user participation, such as lead users working closely with the developers, are not feasible due to scarce resources on both sides (users and developers) and the spatial distance between them.

Hence, the idea arose that we require additional structures for feedback and wishes of the users. It should be possible for them to asynchronously give us feedback on insights they gained during their work, problems they encounter, and visions they have for further developments. Our users are not developers or requirements engineers, and solutions such as feature requests via e-mail or ticketing systems like Jira only take unstructured information. Therefore, we inevitably run into problems such as incomplete or hidden requirements and stakeholders who have difficulties separating requirements from known solution designs [5].

Our solution to this challenge is to empower the users to participate independently and asynchronously in the requirements elicitation process via an assisting system that helps them structure their thoughts so that they can straightaway suggest and describe experienced problems as well as new ideas and changes. The main contribution of this paper is to discuss and evaluate the possibility of such a tool-guided elicitation process. To that end, we present

  • the details of the idea of a tool-guided elicitation process (cf. Sect. 3),

  • a classification of requirements elicitation techniques regarding their usefulness to locate and scope problems and their suitability to be applied by users independently of requirements engineers (cf. Sect. 4),

  • how we integrated selected elicitation techniques in our software prototype, called Vision Backlog, as a proof of concept that such a tool is technically feasible (cf. Sect. 5),

  • and the results of an initial study regarding the usability of the tool as well as the usefulness of the results (cf. Sect. 6).

2 ZenMEM: Context and Problem of Our Use Case

The project ZenMEM exhibits all the challenges addressed in the introduction. There is a team of three developers, with no one dedicated solely to usability and requirements engineering. The field of digital music editions is quite novel, and the de-facto standard is still paper-based music editions. Hence, everything that is done equates to an innovation, if not a revolution, for the potential user base of approximately 500 users worldwide. A handful of musicologists are based a 30-minute car drive away, and we are eager to have them work more closely with the developers. However, they are very limited by the strict schedules of their other projects. Another 50 musicologists are scattered across an eight-hour car drive radius.

Because of their current work style, they have many routines that are not described formally but are based on their experience and only become a topic when they actively need them. For that reason, we regularly miss important requirements. On top of that, the tool support we introduce represents emerging practices that still must prove to be good or best practices in the field of digital music editions. Hence, the work style and requirements will change over time, and regular feedback from the users is needed without interfering too much in their work or requiring them to be on-site.

There are different challenges for the requirements elicitation process. Naive stakeholders have very little knowledge of the elicitation process, so either they must learn it or they must keep depending on the requirements and usability engineer. Neither solution is feasible considering the stringent time constraints.

Elicitation requires specific skills. Among all the stakeholders, only requirements engineers are familiar with those [6]. If stakeholders must depend on this role for elicitation all the time, the requirements engineer can become a bottleneck. Furthermore, in small organizations as well as in this project the requirements engineer is a relatively busy resource.

Typically, naive stakeholders focus more on the suspected solution than on the actual problem they are facing. Often, they are not even aware of the actual problem. But identifying the core of the stakeholder's problem is a necessary step towards quality requirements [7].

Additionally, the notes, lists, and sketches that are mostly used to record the needs or expectations of stakeholders are not efficient means, as they cannot naturally be tied to actual requirements [6]. Formal meetings are not sufficient to thoroughly extract stakeholders' needs and expectations [6]. Stakeholders can gain insights about their needs at literally any time, especially while they are working; during formal meetings it can be hard to point out or recollect these specifics [7,8,9].

3 Towards a Tool-Guided Elicitation Process

Requirements elicitation is described as learning, uncovering, extracting, surfacing, or discovering needs of customers, users, and other potential stakeholders by Hickey and Davis [10]. In the requirements engineering process, elicitation is one activity besides analysis, triage, specification, and verification. Although most existing models show these activities in an ordered sequence, Hickey and Davis state that in reality, requirements activities are not performed sequentially, but iteratively and in parallel. This is an important insight for a tool-guided elicitation process, as it emphasizes that elicitation will never be unidirectional and must be conducted continuously.

Hence, a tool-guided elicitation process for users will never be a complete substitute for user researchers and requirements engineers as analysts. Instead, the activity of requirements elicitation will be shared among stakeholders and analysts, with feedback mechanisms between these two groups. A tool-guided process would not replace requirements analysts, but it would ease their work by reducing the effort and time spent on eliciting correct and complete requirements. Because of our use case, a tool-guided process must support and connect both stakeholders and analysts.

Equally, the tool must support a learning process over time, as is common in approaches such as Design Thinking, DevOps, and Lean UX. This is especially reasonable for the context of digital music editions (cf. Sect. 2), as it is distinguished by innovations and complex as well as chaotic problem domains. Innovations are not adopted by everyone at once but need time to prove their advantages. Usually they are adapted during the adoption process in ways that were not intended by the originators [2]. Complex and chaotic problem domains need to be handled with a strategy that involves probing/acting as a starting point, sensing the effects, and reacting accordingly [11]. Thus, it is safe to assume that the initial requests made by the tool need to be adapted and refined over time.

To sum up, a tool-guided requirements elicitation process should capture the stakeholders' exact needs and expectations in the form of goals with associated context and always be available, so that stakeholders can benefit from it at any time. The knowledge and information captured through this system shall help requirements engineers to extract tacit knowledge and to formulate more accurate and complete requirements. All stakeholders shall actively participate in understanding the underlying problem to collectively reach an appropriate solution.

Besides the tool approach and its potential users, it must be defined which techniques should be incorporated. According to Maiden [12], the primary function of requirements work is to locate and scope problems, then create and describe solutions. Hall [40] once stated that the first rule of user research is to never ask anyone what they want. Nielsen [41] points in the same direction and states that to design the best UX, one should pay attention to what users do, not what they say. Adding the 'I can't tell you what I want, but I'll know it when I see it' [13] dilemma, it becomes obvious that for user participation we should start with techniques that focus on locating and scoping problems instead of creating and describing solutions. In the following, possible elicitation techniques are classified for potential use in a tool-guided elicitation process.

4 Classification of Elicitation Techniques

Based on a literature review, we gathered requirements elicitation techniques from different disciplines such as social science, design, usability engineering, and requirements engineering (cf. Table 1 for the gathered techniques). These techniques need to be classified according to their usefulness for our goals.

Table 1. Classification criteria for elicitation techniques

This classification can be done in many ways. One way is to classify them according to the means of communication they involve: conversational, observational, analytic, and synthetic [14]. Conversational methods are based on verbal communication between two or more people and are therefore also called verbal methods; the best example is interviews. Observational methods are based on understanding the problem domain by observing human activities. There are requirements that people cannot articulate verbally; those are acquired through observational methods, for example protocol analysis. Analytic methods provide ways to explore the existing documentation of the product or knowledge and to acquire requirements from a series of deductions, helping analysts capture information about the application domain, workflow, and product features; an example is card sorting. Synthetic methods systematically combine conversational, observational, and analytic methods into a single method and provide models to explore product features and interaction possibilities; an example is prototyping with storyboards.

Although the classification above makes sense, such schemes are not much help considering our goals. The primary concern is that the stakeholders should focus on the problems they are facing and should not get distracted by solution or implementation details. Another challenge is that a technique should be representable in software. Examining the literature that describes these techniques [15, 16], the following classification criteria were established:

  • Locating and scoping problems:

    Is the technique intended to locate and scope problems rather than being solution oriented?

  • Suitable for autarkic execution:

    Can the technique be used individually, i.e., is it not performed as a group activity?

  • Practicable for both stakeholder and analyst:

    Can the technique be used by both stakeholder and analysts?

  • Representable in software:

    Can the technique be imitated in software?

Direct answers regarding locating and scoping problems and suitable for autarkic execution can be found in the above-mentioned literature. Practicable for both stakeholder and analyst indicates whether a technique can be used by stakeholders, considering how much in-depth knowledge is required and keeping in mind that stakeholders must not have to learn anything new. Representable in software classifies techniques according to their ability to be imitated in software; software that implements certain techniques already exists [17,18,19,20]. Other techniques are included in our prototype, and their usage is described in Sect. 5. Techniques which fulfill a specific criterion are marked in Table 1.
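To make the classification operational inside a tool, the criteria can be encoded as simple boolean flags per technique. The following TypeScript sketch is purely illustrative: the technique names and flag values are placeholders and do not reproduce the actual content of Table 1.

```typescript
// Illustrative encoding of the four classification criteria; names and flag
// values are placeholders, not the actual content of Table 1.
interface ElicitationTechnique {
  name: string;
  locatesAndScopesProblems: boolean; // problem- rather than solution-oriented
  autarkicExecution: boolean;        // usable individually, not only as a group activity
  practicableForBoth: boolean;       // usable by stakeholders and analysts alike
  representableInSoftware: boolean;  // can be imitated in software
}

const techniques: ElicitationTechnique[] = [
  { name: "Interview",         locatesAndScopesProblems: true,  autarkicExecution: false, practicableForBoth: false, representableInSoftware: true },
  { name: "Protocol analysis", locatesAndScopesProblems: true,  autarkicExecution: true,  practicableForBoth: false, representableInSoftware: true },
  { name: "Card sorting",      locatesAndScopesProblems: false, autarkicExecution: true,  practicableForBoth: true,  representableInSoftware: true },
];

// A technique is a candidate for the tool-guided process only if it fulfills all four criteria.
const candidates = techniques.filter(t =>
  t.locatesAndScopesProblems && t.autarkicExecution &&
  t.practicableForBoth && t.representableInSoftware
);
console.log(candidates.map(t => t.name));
```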

5 Vision Backlog – A Prototype for a Tool-Guided Elicitation Process

Based on this classification, the elicitation techniques listed in Table 2 have been selected for implementation in our prototype, which we call Vision Backlog. The selected elicitation techniques are intended to gather diverse information at different stages of the elicitation process and are normally practiced in different formats and environmental settings.

Table 2. Selected elicitation techniques and their purpose in Vision Backlog

To make these techniques usable in Vision Backlog, we analyzed them for common factors and found one in questioning. Based on this insight we created a total of 34 questions corresponding to the purposes of the different selected techniques. The questions ask for information about stakeholder goals, activities, aptitudes, attitudes, and skills. These questions have been integrated into the stakeholder view of Vision Backlog.
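A minimal sketch of how such a question catalog could be structured is shown below; the technique names, topics, and question wordings are illustrative placeholders and do not reproduce the actual 34 questions of the prototype.

```typescript
// Illustrative structure of a guided question catalog; technique names, topics,
// and wordings are placeholders, not the actual 34 questions of Vision Backlog.
type QuestionTopic = "goal" | "activity" | "aptitude" | "attitude" | "skill";

interface GuidedQuestion {
  id: string;
  technique: string;    // selected elicitation technique the question is derived from
  topic: QuestionTopic; // what the question asks about
  text: string;
}

const questionCatalog: GuidedQuestion[] = [
  { id: "q01", technique: "interview", topic: "activity",
    text: "Which task do you want to tell us more about?" },
  { id: "q02", technique: "task analysis", topic: "goal",
    text: "Why do you perform this task?" },
  // ... up to 34 questions in the actual prototype
];

// Group the questions by topic so that each form section can render its own subset.
const byTopic = new Map<QuestionTopic, GuidedQuestion[]>();
for (const q of questionCatalog) {
  byTopic.set(q.topic, [...(byTopic.get(q.topic) ?? []), q]);
}
```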

The stakeholder view is intended for stakeholders (like users) to enter data about the tasks they perform, together with contextual data such as why they perform a task, what tools or knowledge they require to perform it, and whether there are any alternatives to it. Along with this information, they provide personal information such as their education, job designation, and skills. The stakeholder view is therefore split into two areas: tasks and user profile.

The user profile area (cf. Fig. 1) is presented on the very first login with a help pop-up that explains the purpose of the application and the different functionalities, including their importance. In this area the stakeholders create a personal profile. The questions we ask are based on Cooper's suggestions for personas [19] and on Rogers' adopter categories, which describe how innovations are adopted by different groups of people [2].

Fig. 1. User profile creation

In the task area (cf. Fig. 2), stakeholders can create the tasks they perform as part of their job. They can provide details about how frequent and how important a task is, the reason behind performing it, and any improvements they can think of. This area is divided into four steps: Quick intro, Think about it, Supporting details, and Finishing details. Each step includes a description of what it is about. It is also possible to suspend the steps and continue them later.

Fig. 2. Vision Backlog – stakeholder view: task creation

Within the step Quick intro, the user gives an overview of the task they want to tell the analyst more about. The questions in this step ask for a short description of the task, its frequency and importance, the user's role, and an overview of the steps necessary in this task. In the next step, Think about it, a short rationale for the task is given before proceeding to the Supporting details step, in which information regarding the context and the impacts of the task is provided. In the last step, Finishing details, additional information about other people involved is given as well as possible factors for improvements.
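A possible data model behind these four steps is sketched below in TypeScript. The field names are our own illustration of the questions described above, not the actual schema of Vision Backlog; the optional later steps reflect that entries can be suspended and resumed.

```typescript
// Hypothetical data model for a task entry, mirroring the four wizard steps.
// Field names are our own illustration, not the actual schema of Vision Backlog.
interface QuickIntro {
  shortDescription: string;
  frequency: "daily" | "weekly" | "monthly" | "rarely";
  importance: 1 | 2 | 3 | 4 | 5;
  role: string;
  steps: string[];                 // rough overview of the steps within the task
}

interface ThinkAboutIt {
  rationale: string;               // why the task is performed
}

interface SupportingDetails {
  context: string;                 // tools, knowledge, surrounding conditions
  impacts: string;                 // effects of the task on the stakeholder's work
}

interface FinishingDetails {
  involvedPeople: string[];
  improvementIdeas: string;
}

interface TaskEntry {
  id: string;
  stakeholderId: string;
  quickIntro: QuickIntro;
  thinkAboutIt?: ThinkAboutIt;           // later steps are optional so that an entry
  supportingDetails?: SupportingDetails; // can be suspended and continued later
  finishingDetails?: FinishingDetails;
}
```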

Besides the stakeholder view with the user profile and task areas, we implemented an analyst view (cf. Fig. 3). This view is dedicated to the usability/requirements engineer for exploring and discovering the inputs from the stakeholders. It offers various filter and sorting options, and the analyst can structure the task descriptions from the stakeholders into features for further development. Besides these functionalities for organizing the created task descriptions, this view also includes a dashboard presenting the users with their attributes for creating personas as well as an overview of the system.

Fig. 3. Vision Backlog – analyst view: task list screen
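The filtering, sorting, and feature assignment in the analyst view could, for instance, be realized as sketched below. The sketch builds on the hypothetical TaskEntry model from the previous sketch; the Feature structure and function names are assumptions, not the prototype's actual implementation.

```typescript
// Illustrative filtering, sorting, and feature assignment in the analyst view,
// building on the hypothetical TaskEntry model sketched above.
interface Feature {
  title: string;
  taskIds: string[];  // stakeholder task descriptions assigned to this feature
}

// Filter tasks, e.g. by the stakeholder's stated role, and sort by importance.
function filterAndSort(tasks: TaskEntry[], role?: string): TaskEntry[] {
  return tasks
    .filter(t => role === undefined || t.quickIntro.role === role)
    .sort((a, b) => b.quickIntro.importance - a.quickIntro.importance);
}

// Assign a task description to a feature for further development.
function assignToFeature(feature: Feature, task: TaskEntry): Feature {
  return { ...feature, taskIds: [...feature.taskIds, task.id] };
}
```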

6 Evaluation

We evaluated Vision Backlog with an initial two-level study consisting of a usability test and an expert review on the content quality. The two evaluations were conducted separately; the data entered during the usability tests was used as input for the expert review on the content quality.

The goal of the usability test was to show that stakeholders understand the tasks presented in Vision Backlog and are able to use the tool for the selected elicitation techniques. We used the single evaluation of AttrakDiff [39] to survey the usability and attractiveness of our tool with a total of five participants. The usability test itself was done remotely and asynchronously. The aggregated results of the survey are presented in Fig. 4. The meanings of the terms used in the diagram are as follows:

Fig. 4. Usability evaluation results

  • Pragmatic Quality (PQ) indicates how successful users are in achieving their goals with the product,

  • Hedonic Quality-stimulation (HQ-S) indicates to what extent the product supports human needs of developing something new in terms of novel, interesting and stimulating functions, contents, interaction and presentation styles,

  • Hedonic Quality-identity (HQ-I) indicates to what extent the product allows the user to identify with it, and

  • Attractiveness (ATT) is the global value of the product, to which hedonic and pragmatic qualities contribute equally.

The results indicate that Vision Backlog is rated positively, but the confidence intervals for PQ (1.62) and HQ (0.95) are quite large, meaning that the participants evaluated the application differently. Overall, the participants were successful in achieving their goals and rated the application attractive, but there is still room for improvement.
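As a side note, the aggregation behind such AttrakDiff-style diagrams can be sketched as follows, assuming each word pair is rated on a 7-point scale coded from -3 to +3 and each scale value is the mean over its items; the confidence interval here uses a simple normal approximation and may differ from AttrakDiff's own evaluation procedure.

```typescript
// Illustrative aggregation of AttrakDiff-style ratings, assuming 7-point items
// coded from -3 to +3; the confidence interval uses a simple normal approximation.
type Scale = "PQ" | "HQ-S" | "HQ-I" | "ATT";

interface Rating {
  participant: string;
  scale: Scale;
  item: string;   // word pair, e.g. "confusing - clearly structured"
  value: number;  // -3 .. +3
}

function scaleStats(ratings: Rating[], scale: Scale) {
  const values = ratings.filter(r => r.scale === scale).map(r => r.value);
  const n = values.length;
  const mean = values.reduce((sum, v) => sum + v, 0) / n;
  const variance = values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / (n - 1);
  const halfWidth = 1.96 * Math.sqrt(variance / n); // 95% CI, normal approximation
  return { mean, confidenceInterval: [mean - halfWidth, mean + halfWidth] as const };
}
```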

We had three participants for the expert review on the content quality. All participants had a background in computer science and experience in requirements elicitation. They used the analyst view and afterwards had to answer five questions in an online form with a four-point answer scale (cf. Table 3). Overall, the participants agreed that the utilized techniques and their relation to requirements elicitation were explained properly. They strongly agreed that the analytics provided by Vision Backlog were helpful but gave a mixed picture regarding the representation of the analytics.

Table 3. Usefulness evaluation results

7 Conclusion and Outlook

In this paper we presented the idea of a tool-guided elicitation process that empowers users and other stakeholders to actively participate in the requirements elicitation process without being dependent on a usability or requirements engineer. To implement this idea, we created a classification for requirements elicitation techniques and realized it with a prototype called Vision Backlog, which is also presented in this paper. In a first study, Vision Backlog was evaluated regarding its usability and usefulness.

Vision Backlog concentrates on requirements elicitation in general without focusing on any specific software development methodology. As the concept of Vision Backlog is itself agile in nature, i.e. it encourages stakeholders to iterate over their own vision repeatedly to refine it, it would be interesting to figure out and demonstrate how such an approach can be extended and integrated into an agile methodology (e.g. into Scrum). By doing this, stakeholder needs can be managed and monitored from their inception until they are developed, typically within the span of a sprint. This would help to measure stakeholder involvement in elicitation activities and give feedback on how much rework is avoided in the presence of Vision Backlog. The same approach could also be used to reduce technical debt. The general idea is depicted in Fig. 5.

Fig. 5. Future work

Vision Backlog would serve as a starting point from which a broader product vision is built and through which potential tasks concerning a specific product feature can be extracted. Such tasks could correspond to tasks in tools like Jira and form an initial product backlog or even a smaller sprint backlog. Once the development work for a sprint is over, a retrospective provides feedback to Vision Backlog so that stakeholders or end users can refine their vision if the developed solution does not match their ideas sufficiently. Ideally, rework costs when practicing Scrum without Vision Backlog should be higher than when Vision Backlog is integrated into Scrum.
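The envisioned round trip could look roughly like the following sketch, which reuses the hypothetical TaskEntry model from Sect. 5; the backlog item shape and the feedback structure are assumptions for illustration, not an existing Jira integration.

```typescript
// Illustrative round trip between Vision Backlog and a Scrum tool; reuses the
// hypothetical TaskEntry model from Sect. 5. The backlog item shape is an
// assumption, not an existing Jira integration.
interface BacklogItem {
  summary: string;
  description: string;
  sourceTaskId: string;  // link back to the originating Vision Backlog entry
}

function toBacklogItem(task: TaskEntry): BacklogItem {
  return {
    summary: task.quickIntro.shortDescription,
    description:
      `Rationale: ${task.thinkAboutIt?.rationale ?? "n/a"}\n` +
      `Improvement ideas: ${task.finishingDetails?.improvementIdeas ?? "n/a"}`,
    sourceTaskId: task.id,
  };
}

// After a sprint, retrospective feedback is attached to the originating entry so
// that stakeholders can refine their vision in Vision Backlog.
interface RetrospectiveFeedback {
  sourceTaskId: string;
  matchesVision: boolean;
  comment: string;
}
```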