
1 Introduction

Implementing unique and novel software applications poses a great challenge: designers can only build on existing knowledge to a limited extent. This limits their ability to accurately predict what delivers value for users. To illustrate how difficult such predictions are, consider the following real-world example.

The default experience for the Netflix front page (see Fig. 1a) is a simple page with a Sign In button and a Start Your Free Month button, offering three pieces of information: the basic offering, the costs, and the promise that the service can be used everywhere.

Fig. 1. Example from Using A/B Testing To Inform Your Designs by Netflix [3].

Our intuition tells us that more people can be convinced to sign up and use the service if they are given more information. This is what Netflix thought [3]. Thus, they implemented a prototype in which users could browse the library without logging in (cf. Fig. 1b). They tested it against the default experience (cf. Fig. 1a) and were surprised that the default experience still had a better conversion rate.
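The mechanics of such a test can be sketched as follows. This is a minimal illustration of hash-based variant assignment and conversion comparison; the variant names and all numbers are invented for the example and are not Netflix's actual data.

```python
import hashlib

def assign_variant(user_id, variants=("default", "browse_first")):
    """Stable hash-based bucketing so a user always sees the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return variants[bucket % len(variants)]

def conversion_rate(signups, visitors):
    """Fraction of visitors who signed up."""
    return signups / visitors if visitors else 0.0

# Illustrative (invented) numbers: despite intuition, the simpler
# default experience converts better than the browse-first prototype.
rates = {
    "default": conversion_rate(signups=520, visitors=10_000),
    "browse_first": conversion_rate(signups=455, visitors=10_000),
}
winner = max(rates, key=rates.get)
```

In a real deployment, significance testing and guardrail metrics would be needed before declaring a winner; the point here is only that the comparison is made on observed behavior, not on predictions.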

Their first guess was that their intuition was right but their implementation was not good enough. They implemented four more prototypes and tested them against the default experience; these were inferior to it as well. However, during their tests Netflix learnt why the default experience is better than the supposed improvements (cf. Blaylock and Iyengar [3]).

Although it was only a moderate change, Netflix was unable to predict that it would worsen the conversion rate. This coincides with the experience others have had with unique and novel ideas. Kohavi et al. [20] give an overview of figures from companies such as Microsoft, Netflix, or Web Analytics: between 66% and 90% of their implemented ideas fail to show value. This means that in most cases experts suggested implementing a certain feature but were wrong in estimating its value.

Current development approaches such as agile software development (e.g. Scrum) or human-centered design are only of limited help with this challenge if used on their own. As Norman and Verganti [30] argue, these approaches fit only incremental innovations, as they optimize along a known solution path; they do not try to understand the problem and find more suitable solution paths. Approaches like Design Thinking, with its diverging and converging thinking, can help to understand underlying problems and find suitable solutions (cf. Plattner et al. [32]). Hence, Design Thinking can help to better estimate the value. However, it is unclear how it can be applied successfully in software development [23].

Lindberg et al. [23] see two ways in which Design Thinking can be applied to software development: on the one hand as a Front-End Technique and on the other as an Integrated Development Philosophy. As a Front-End Technique, Design Thinking is placed as a phase prior to the development process; the output of the Design Thinking phase is a single solution that is then implemented as software. Design Thinking as an Integrated Development Philosophy is implemented as a one-team approach, meaning that all core members (e.g. software developers, designers, lead users) are involved throughout the development process.

In this paper we propose a mixture of Front-End Technique and Integrated Development Philosophy to take the creativity of the team and the underlying challenge of the Netflix example (cf. Sect. 2 for further details) into account. Hence, to take a user-centric approach and to ensure that we deliver value in the actual context of use, we describe a software development approach with the following key aspects:

  • A special role called Value Designer, which ensures that the solution is aligned with the problem and that the knowledge from the Design Thinking stage is carried into the software development stage (cf. Sect. 4.6).

  • Design Thinking as a Front-End Technique that ends before only one possible solution is left (cf. Sect. 4.2).

  • A continuation of the Design Thinking stage into software development through the simultaneous development of at least two solution paths and their validation through field experiments (cf. Sect. 4.4).

  • An intermediate stage to prepare the results of the Design Thinking stage for the software development stage (cf. Sect. 4.3).

Although Design Thinking is a methodology that does not limit the medium with which prototypes are produced, software as a prototype medium, especially when used in the actual context of use, has special features (cf. Sect. 2) that need to be considered.

2 Challenges of Developing Innovative Software Applications

As already mentioned in the introduction, current approaches fit only incremental innovations. In this paper, the focus is on radical innovations. But what are radical innovations, and how do they distinguish themselves?

To define them, we look at technological radicalness, which Dahlin and Behrens [7] characterize with three criteria:

  • Criterion 1. The invention must be novel: it needs to be dissimilar from prior inventions.

  • Criterion 2. The invention must be unique: it needs to be dissimilar from current inventions.

  • Criterion 3. The invention must be adopted: it needs to influence the content of future inventions.

As can be seen in Criterion 3, the term successful radical invention implies that something must be adopted. But whether something has been adopted and has influenced the content of future inventions can only be determined ex post. This makes the criterion inapplicable in a development approach.

Criterion 1 is a comparison with the past and Criterion 2 a comparison with the present. If both criteria apply, we have created a radical invention that can potentially become a successful radical invention. If all three criteria are fulfilled, an invention can be considered a successful radical invention.

We continue to use the definition of a successful radical invention as a synonym for a radical innovation. Since Criterion 3 does not allow us to guarantee that the development process will lead to radical innovation, we limit ourselves to the first two criteria. Hence, we are not talking about radical innovations in this paper, but about unique and novel software applications.

If our software application is unique and novel, which is a prerequisite for radical innovations, we can rely neither on data from the past nor on data from the present. Hence, constraints and interacting dependencies must be uncovered instead of analyzing or categorizing observations. This in turn is only possible by probing or acting and observing the effects, which is what Netflix did in the sign-up example described in the introduction.

Decision Making

The need for such a different approach is outlined by Kurtz and Snowden [21] in the Cynefin framework, which was developed to address "[...] the dynamics of situations, decisions, perspectives, conflicts, and changes in order to come to a consensus for decision-making under uncertainty". Instead of following a one-size-fits-all approach [24], it advocates different decision-making approaches according to the context or domain. Very briefly, Cynefin contains, clockwise (see Fig. 2), the domains Chaotic, Complex, Complicated, and Obvious. In the Chaotic domain, we know the least about the context and its constraints, including interacting dependencies. The closer we get to the Obvious domain, the more we know about the constraints, the better we can predict future states, and the better we can plan. The less we know, the more we must try out in order to uncover and understand dependencies and constraints.

Fig. 2. The Cynefin framework and its five domains Obvious, Complicated, Complex, Chaotic, and Disorder. Own representation based on [21].

Unique and novel inventions relate to the unordered domains Chaotic and Complex, the domains of novel and emergent practice, whereas, according to Kurtz and Snowden [21], incremental innovations are mainly located at the boundary between Obvious and Complicated. The classification into the unordered domains also means that a mere selection of solutions is not enough; problem and solution understandings must be developed creatively. Therefore, the approach must respect the basic principles of creativity.

Creativity

Creative performance is determined by domain knowledge and cognitive flexibility, the two central factors at the cognitive level [8]. This is because creativity means combining the existing into something new. Humans depend on their knowledge and on the flexibility to combine it: the more knowledge they have and the more flexible they are, the more possibilities exist for novel recombinations.

Experts have a lot of domain knowledge, but it is memorized in a quite stable schema; therefore, they lose cognitive flexibility. Engaging in a dynamic environment within one's domain attenuates the relationship between domain expertise and cognitive entrenchment [8].

What does that mean for software development? In software development we have at least two domains: the implementation domain and the application domain. The standard case is to have an expert for each of them. Since the implementation expert essentially has the domain knowledge for the implementation domain but not for the application domain, their creative performance within the application domain is limited.

Having knowledge from both domains is important to align the implementation with the core values of the application domain. But it takes a lot of effort for the implementation expert to dig deep into the application domain.

Especially from a cross-project perspective, it becomes clear that a lot of effort is repeatedly put into getting implementation experts to learn about the application domain.

Preparing for Adoption

In addition to the first and second criteria (past and present perspective), the third criterion (future perspective) for radical innovations should also be considered in the approach. Even if adoption cannot be predicted, it is important to know the attributes that influence the adoption of innovations. According to Rogers [35], innovation means that something is considered new by an individual or a group. It is irrelevant whether other individuals or groups already exist who no longer regard it as new; not even the point in time matters. What counts is solely the subjective perception of individuals or groups of whether something is regarded as an innovation, i.e. as new. Innovations do not spread arbitrarily or abruptly but follow a certain lawfulness. The process that describes this is defined by Rogers as Diffusion of Innovations and consists of four main elements: an (1) innovation is communicated through certain (2) channels over (3) time among the members of a (4) social system. In our paper, the first main element is particularly interesting, as it encompasses five attributes which have an influence on the product and the development process:

  1. Relative Advantage is the degree to which an innovation is perceived as better than the idea it replaces. At this point, it is not important whether the innovation provides objectively large advantages, but whether it is perceived as advantageous by the individual. The more advantageous an innovation is perceived to be, the faster it is adopted.

  2. Compatibility is the degree to which an innovation is perceived as compatible with the current value system, past experiences, and needs of the adopters. If an innovation is incompatible, its adoption often requires a new value system, which is a relatively slow process. Therefore, compatible innovations are adopted faster than incompatible ones.

  3. Complexity is the degree to which an innovation is perceived as difficult to understand and use. Innovations that are easier to understand spread more rapidly than innovations that require the adopter to learn new skills and knowledge.

  4. Trialability is the degree to which an innovation can be experimented with on a limited basis. An innovation that can be tried out represents less uncertainty for the individual.

  5. Observability is the degree to which the results of an innovation are visible to others. The easier it is for individuals to see the results of an innovation, the more likely they are to adopt it.

Regarding the development of unique and novel software applications, this means that the more tangible they are, the better these innovations can be assessed. Depending on compatibility, it may take longer for people to accept an innovation and give positive feedback: the more incompatible it is, the longer it can be acceptable that people do not like it. Therefore, the goal must be to build tangible prototypes or software applications as quickly as possible so that the relative value can be assessed at an early stage and with less bias.

Special Features of Software for Prototyping

As Boehm [5] points out in his summary of past software experiences, software development has always been in the continuum between "engineer software like you engineer hardware" and software crafting. The former means a process where everything is preplanned to ensure quality before the first execution in the actual context of use. The latter corresponds to a process of experimenting and working with rapid prototypes, even in the actual context of use.

From the 1970s to the early 2000s, development focused especially on the first part of the continuum. There were many reasons for this, such as contract design or infrastructure costs (e.g. testing, operation, distribution). The result, however, was an environment that is not conducive to experimentation.

Traditional software is written in one technology and made to run on a system with shared libraries and fixed hardware. This leads to side effects if, for example, several versions of a shared library are required or multiple applications require the same resource. Because multiple applications share a non-isolated operating environment, changes can result in an unstable system; therefore, changes are seldom made to such a system. In addition, manual distribution, as is usual with such systems, leads to higher effort and higher risk (cf. Knight Capital's bankruptcy due to an incorrect deployment [39]).

Depending on the complexity of the already implemented code, a switch in technology may become too expensive because everything must be transferred at once. In addition, polyglotism across technologies is usually not possible. As a result, the concepts and constraints of the initially selected technology must be kept, despite more suitable concepts existing in other technologies.

In sequential, phase-oriented software development, software is usually implemented with a point-based engineering approach (cf. Denning et al. [9]), which results in a large overhead compared to set-based concurrent engineering as soon as changes must be communicated (cf. Ward et al. [44]). This overhead can make it seem unfeasible to integrate insights from experiments. In combination with figures on the relative cost of changing software (cf. Stecklein et al. [42]), it has also cemented the image that only one solution can be implemented at a time.

In summary, this inhibits experiments as follows:

  • The risk of changing a running system is high

  • Hosting several alternatives at the same time requires high effort

  • Integrating findings requires high effort

  • Technology decisions from the past limit the ability to make decisions in the future

Fortunately, this has changed since the advent of agile software development in the early 2000s. Technologies and approaches like Cloud Computing [1], Containerization [31], DevOps [40], and Microservices or Evolutionary IT Systems [9] also have a positive impact here.

We provide further details on how we encourage using these for experiments in Sect. 4.4.

3 Foundations for Our Solution

From the previous section it becomes clear how important Design Thinking is for the development of unique and novel software applications. Above all, diverging and converging thinking (cf. transitions in Cynefin [21]) and working with prototypes (cf. Preparing for Adoption in Sect. 2) are essential for finding as yet unknown interacting dependencies and constraints. However, once we enter the ordered domains (Obvious and Complicated) of the Cynefin framework, it is better to make decisions based on analysis or categorization than on probing and acting (cf. Fig. 2). Therefore, the solution should be limited to the transition from the unordered to the ordered domains, in order to give priority to the established methods there.

Transition means a continuous improvement of understanding, starting from a very incomplete understanding. Therefore, fewer properties of the final product are needed at the beginning, but more probing with different cheap solutions. As a result, media other than functionally complete software are more suitable for the early phases. For example, paper prototypes can be produced much faster and cheaper if they do not need to be all-inclusive or if interaction is less important. The same idea is pursued in set-based concurrent engineering in automotive engineering, where clay models instead of finished car bodies are the starting point [44].

Fig. 3. Prototype levels. Own representation based on Houde et al. [18].

Which general properties a prototype can have and how they can be reduced is described by Houde et al. [18] (cf. Fig. 3). The prototype levels they describe are Value, Technical, Look & Feel, and Integration. Houde et al. called the first level Role in their paper, meaning "what an artifact could do for a user". We find that Value, following the idea of Value-Based Software Engineering [4], is a better description. In this context, Value is not a financial term but means "relative worth, utility, or importance". The Technical or Implementation level is for "answering technical questions about how a future artifact might actually be made to work", and the Look & Feel level serves to "explore and demonstrate options for the concrete experience of an artifact". The last level is Integration, which can integrate the properties of two or all three levels.

Unfortunately, Houde et al. [18] do not describe how to navigate through the levels. Therefore, we use the recommended user experience design process proposed by Mayhew [27] (cf. Fig. 4). Mayhew suggests that utility should be determined first, as it is the prerequisite of a "great [...] user experience". This allows goal-oriented development and minimizes the risk of changes at the functional or technical level (cf. Stecklein et al. [42] and point-based engineering [44] for the cost effects of changes at those levels). It also coincides with the basic ideas of Value-Based Software Engineering, which sees value (e.g. utility) as providing guidance and the neglect of value considerations as the cause of most software project failures (cf. Boehm [4]). For this reason, our approach also starts at the Value level, which includes usability and persuasiveness (cf. Sect. 2).

Fig. 4. Recommended user experience design process. Own representation based on Mayhew [27].

In contrast to the recommended user experience design process, we do not see the need for Functional Integrity to follow Graphic Design. Due to the greater adoption of the Model-View-Presenter (MVP) architecture pattern (cf. Potel [34] and Fowler [12]) in software technologies (e.g. .NET or Angular), the presentation layer has become more separated from the logic layer (e.g. in comparison to MVC). This allows a largely independent development of these levels (Technical and Look & Feel in terms of the prototype levels), which is why they can be developed in parallel.
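This separation can be illustrated with a minimal MVP sketch. All class and method names here are our own illustration, not taken from a specific framework; the point is that the presenter mediates between a replaceable view (Look & Feel) and a model (Technical), so both sides can evolve in parallel.

```python
class SignupModel:
    """Technical level: business logic, knows nothing about presentation."""
    def __init__(self):
        self.users = []

    def register(self, email):
        if "@" not in email or email in self.users:
            return False
        self.users.append(email)
        return True

class ConsoleView:
    """Look & Feel level: one of several interchangeable views."""
    def show(self, message):
        print(message)

class SignupPresenter:
    """Mediates between model and view; either side can be swapped out."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def on_submit(self, email):
        ok = self.model.register(email)
        self.view.show("Welcome!" if ok else "Invalid or duplicate email.")
```

Because the presenter talks to the view only through a narrow interface (`show`), a designer can replace `ConsoleView` with a different look and feel without touching the model, and vice versa.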

As a result, to navigate through the prototype levels, Value must first be identified. In the next step, Technical and Look & Feel, each integrated with Value, can be examined in parallel. Finally, integration is achieved across all three levels.

4 Solution Idea: Insight Centric Design and Development

From the combination of the considerations in the previous sections, we ultimately developed our software development approach Insight Centric Design and Development (ICeDD) (cf. Fig. 5) to handle the challenge of developing unique and novel software applications.

Fig. 5. Integrated process for ICeDD based on [18, 19, 21, 23, 27].

Insight is intended to emphasize that this approach, like qualitative research, is mainly concerned with the reconstruction of meaning and the understanding of the problem and solution space. The focus is on creating unique and novel software applications in terms of Value, not Technical or Look & Feel (cf. Sect. 3). Incremental improvements as in human-centered design (cf. Sect. 1) and reliable measurement are outsourced to the final (5) Optimization stage. The other four stages of ICeDD are (1) Initialize Design Thinking, (2) Execute Design Thinking with Non-Software, (3) Prepare Design Thinking with Software, and (4) Execute Design Thinking with Software. We have divided Design Thinking into stages (2) and (4) in order to take advantage of non-software prototypes, especially at the beginning, when the problem and solution space is expanded by building many different solutions.

4.1 Stage (1): Initialize Design Thinking

The result of the process depends highly on an adequate Design Challenge. According to the Stanford d.school [41], the Design Challenge frames the process and should neither constrain it to a single problem nor leave it so broad that finding tangible problems becomes difficult. Ideally, it should include multiple characters, multiple problems, and multiple needs of the characters, with the characters, problems, and needs being similar among themselves.

These guidelines are good for evaluating a Design Challenge in retrospect but help only to a limited extent in finding one. To find appropriate Design Challenges and thereby initialize our approach, we propose two possible paths: On-Site Feature Requests and Feature Requests from Systematic Analysis.

On-Site Feature Requests is the idea of users asynchronously stating requests for improvements in a structured way. The necessity for this lies in tacit knowledge (cf. Gervasi et al. [14]) and in the fact that certain knowledge is hard to recall without specific cues (cf. Gervasi et al. [14] and Benner [2]). It is therefore important that users can state such requests from within their work context. In [38] we proposed a tool-guided elicitation process to empower users to make such requests in a structured way.

The other path, Feature Requests from Systematic Analysis, is based on analysis by an external person. Since our context does not allow mere categorization or analysis (cf. Sect. 2), a traditional systems analysis that aims to examine an existing process in order to optimize it is not useful. Instead, we need a theory-generating approach that also makes it possible to find unknown problems or solutions. In [28] we adopted Grounded Theory for software development to create such a theory-generating approach.

With the results from these two paths, a Design Challenge is created to start swarming, i.e. finding attractors and evaluating them with the help of Design Thinking.

4.2 Stage (2): Execute Design Thinking with Non–Software

In this stage, Design Thinking is carried out with the help of non-software prototypes and the Design Challenges from the previous stage. The goal is to explore the problem and solution space with non-software prototypes to gain a better understanding before software is used as a medium. Since Design Thinking is a methodology, it must be instantiated according to the conditions (e.g. duration or stakeholders).

In our projects we have run Design Thinking as a one-day workshop format in which both developers and users participate. This workshop sensitizes the various stakeholders to each other and generates initial ideas. In a further step, these ideas are refined by the Value Designer (cf. Sect. 4.6) in coordination with the respective stakeholders.

In this stage, the possible solutions should be reduced to at least two but not more than five. The result of this stage is a set of non-software prototypes optimized at the Value level, together with their documentation. Since the prototypes are more abstract than necessary for an implementation, they still must be prepared for implementation.

4.3 Stage (3): Prepare Design Thinking with Software

The overall goal of this stage is to refine the prototypes at the Technical and Look & Feel levels and to transfer them into requirements that can be used in a software development process (Prepare Integration). The task of the refinement at the Technical and Look & Feel levels is not to come up with novel solutions at these levels, but to align already existing solutions with the value propositions discovered in the previous stage. The Value Designer (cf. Sect. 4.6) supports the designer (Look & Feel) or the software developer (Technical) to ensure that the value is not lost.

The next step is to integrate all three levels (Value, Technical, and Look & Feel) at the requirements level (cf. Fig. 5). The requirements should be specific and understandable but should not include all underlying decisions, so as not to overburden the developer. Nevertheless, it should be possible to understand the underlying incentives if necessary, in order to recognize freedoms and make adjustments. Therefore, the requirements should be linked to their sources to allow traceability.

4.4 Stage (4): Execute Design Thinking with Software

Using Design Thinking means evolving the problem and solution understanding, experimenting with alternatives, and working in short learning cycles. The challenges in software development that arise from this were listed in Sect. 2. To overcome them, an adapted software development approach is required. For adapting such an approach, the 4P's (cf. Fig. 6) described by Jacobson et al. [19] are quite useful:

“The end result of a software project is a product that is shaped by many different types of people as it is developed. Guiding the efforts of the people involved in the project is a software development process, a template that explains the steps needed to complete the project. Typically, the process is automated by a tool or set of tools.”

Fig. 6. The 4P's People, Project, Product, and Process, plus Tools, from the Unified Software Development Process [19]. Own representation.

This means that it is not appropriate for a solution to consider only the process or only the product characteristics, since all 4P's depend on each other. For example, product properties that enable incremental development may represent overhead when a strictly sequential process is required (e.g. for legal reasons). Furthermore, tools can be needed to make a process practicable at all, and people may carry out processes differently because the processes do not match their mindset. Therefore, in this section we describe for all 4P's the characteristics that make Design Thinking with software possible.

People. It is important for the persons involved to understand that the artefacts created in this stage do not necessarily remain as they are and are partly discarded. Since at this stage the development of an understanding of the value comes first, people must be able to concentrate on the properties of the application that are necessary for its valuation. A mindset aimed at immediately developing the perfect application is not beneficial at this point; there should be a basic understanding of experimentation. Otherwise, the same requirements apply as in agile software development.

Product. As Denning et al. [9] emphasize, traditional preplanned development focuses on architectures that meet specifications from knowable and collectable requirements, do not need to change before the system is implemented, and can be intellectually grasped by individuals. This is not compatible with our requirements for Design Thinking, which is why we need evolutionary architectures instead.

Evolutionary architectures are designed for continuous adaptation through successive rapid changes or through competition between several systems [9]. Ford et al. [11] describe how evolutionary architectures can be achieved through appropriate coupling and by allowing for incremental change. Patterns like Model-View-Presenter [34] separate the UI logic from the business logic and therefore enable an independent development of the UI from the backend. Event-driven architectures and microservices allow more loosely coupled, smaller components that can be polyglot with regard to technology.

Event sourcing [13] is particularly interesting in this context, as it enables parallel models, reconstruction of model states, and synergy effects, e.g. for the collection of interaction data. Parallel models allow, on the one hand, the operation of different versions; on the other hand, data models can be adapted with less consideration of side effects. Model state reconstruction is useful for troubleshooting, preparing test environments, and data recovery.
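The core mechanism can be sketched in a few lines. This is our own simplified illustration of event sourcing, not a specific library: the event log is append-only, and any number of parallel read models are derived from it by replaying the same events with different reducers.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    payload: dict

class EventStore:
    """Append-only log; any model state is derived by replaying events."""
    def __init__(self):
        self.log = []

    def append(self, event):
        self.log.append(event)

    def replay(self, reducer, initial, upto=None):
        """Rebuild a model state, optionally from only the first `upto` events."""
        state = initial
        for event in self.log[:upto]:
            state = reducer(state, event)
        return state

# Two parallel read models derived from the same event log:
def signup_count(state, event):
    return state + 1 if event.kind == "signed_up" else state

def click_totals(state, event):
    if event.kind == "clicked":
        button = event.payload["button"]
        state[button] = state.get(button, 0) + 1
    return state
```

The `upto` parameter hints at why state reconstruction is useful for troubleshooting and test environments: any past state can be rebuilt deterministically from a prefix of the log. The same log also doubles as interaction data for experiments.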

In summary, evolutionary architectures help to provide fallback variants, increase reliability, develop individual components independently, and reduce the complexity of the individual components. Through synergies with event sourcing, they also reduce the effort for data collection in experiments.

Process. The requirement for this stage is the rapid implementation of software alternatives and the conducting of field experiments. In the previous stages, the understanding of interacting dependencies, constraints, and the problem and solution space has been deepened. Therefore, the requirements at this stage should be stable enough that changes no longer occur so frequently that cycles of several hours would be useful and necessary. For this reason, such short cycle lengths do not have to be supported in this stage, and longer cycles can be considered, as in agile software development (cf. Terho et al. [43]).

DevOps, including Continuous Deployment, in combination with agile software development is very suitable for this. According to Sharma et al. [40], the goal of DevOps is to accelerate and increase the frequency with which production changes are made available, in order to receive feedback from real users as early and as frequently as possible. Consequently, these processes ensure the timely availability of alternatives for experiments.

Comparing the Build-Measure-Learn cycle from Lean Management (cf. Poppendieck and Poppendieck [33]) with the principles of agile software development (cf. Meyer [29]), it stands out that agile software development focuses on accelerating the Build step to increase the frequency of user feedback. The Measure and Learn parts and their elaboration are missing. However, these are indispensable in order to learn from actual use.

To improve this situation, the implementation is embedded in a process for conducting experiments. At this stage, field experiments are used instead of controlled experiments. The advantage of field experiments is their high external validity. We need this because we are still in a state where our mission is understanding, and for that we need to reconstruct meaning or subjective perspectives from usage in production. The disadvantage of field experiments is their usually lower reliability compared to controlled experiments. At this point, however, controlled experiments would not be appropriate, since they require good knowledge of dependencies and interfering variables.

Project. For each Design Challenge that has passed through the third stage, a project is initialized. The aim is to understand which of the problems and solutions found deliver value.

Tools. In order to make experimentation with software solutions more feasible, automation and assistance tools are needed. The use of evolutionary architectures, as described under Product, in combination with containerization [31], cloud architectures [1], and a build server for the continuous deployment pipeline enables independent automated deployment as well as roll-back of the individual alternatives, versions, or components.
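At its core, the independent deployment and roll-back described above reduces to per-component version bookkeeping that the build server automates. The following is a minimal, hypothetical sketch of this bookkeeping (the class and method names are illustrative assumptions, not part of any real pipeline):

```python
class DeploymentManager:
    """Sketch of independent deployment and roll-back per component,
    as a continuous deployment pipeline would automate it."""

    def __init__(self):
        # component -> list of deployed versions, newest last
        self.history = {}

    def deploy(self, component, version):
        """Deploy a new version of one component independently."""
        self.history.setdefault(component, []).append(version)
        return version

    def rollback(self, component):
        """Discard the newest version and return to the previous one."""
        versions = self.history.get(component, [])
        if len(versions) < 2:
            raise RuntimeError(f"no earlier version of {component}")
        versions.pop()  # drop the faulty version
        return versions[-1]

    def current(self, component):
        return self.history[component][-1]
```

In practice each entry would correspond to a container image tag, so that roll-back is a matter of redeploying the previous image rather than rebuilding.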

To make them available during an experiment, a user-specific online orchestration tool should be present. Its task is to orchestrate (e.g. by rerouting or feature toggles) the different alternatives, versions, or components individually for each user. An opt-out for the user is essential so that they can continue to work productively with the system in the event of errors or malfunctions.
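Such per-user orchestration can be sketched as a small routing layer. The following is a minimal sketch under two assumptions that are ours, not the paper's: users are assigned deterministically via a stable hash, and opted-out users are always routed to the stable default experience:

```python
import hashlib

class Orchestrator:
    """Sketch of user-specific orchestration via feature toggles:
    each user is deterministically assigned one alternative, and
    opted-out users always receive the stable default."""

    def __init__(self, default, alternatives):
        self.default = default
        self.alternatives = alternatives  # e.g. ["A", "B"]
        self.opted_out = set()

    def opt_out(self, user_id):
        # Essential escape hatch: the user keeps working with the
        # default in case of errors or malfunctions in an alternative.
        self.opted_out.add(user_id)

    def route(self, user_id):
        if user_id in self.opted_out or not self.alternatives:
            return self.default
        # Stable hash so the same user always sees the same alternative
        # across sessions, which keeps the experiment consistent.
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        index = int(digest, 16) % len(self.alternatives)
        return self.alternatives[index]
```

A real deployment would place this logic in a gateway (request rerouting) or behind server-side feature toggles; the deterministic assignment is what makes per-user measurements attributable to one alternative.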

Since data (e.g. interaction data, surveys, or free annotations) accumulate during experiments, a tool is required in which these data can be stored, aggregated, and analyzed.
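As an illustration of what such a tool minimally has to do, experiment data can be stored as simple records keyed by alternative and aggregated on demand. The event kinds and field names below are illustrative assumptions:

```python
from collections import defaultdict

class ExperimentDataStore:
    """Illustrative store for experiment data (interaction events,
    survey answers, free-text annotations) keyed by alternative."""

    def __init__(self):
        self.events = []

    def record(self, user_id, alternative, kind, payload=None):
        """Store one raw event as it accumulates during the experiment."""
        self.events.append(
            {"user": user_id, "alternative": alternative,
             "kind": kind, "payload": payload}
        )

    def aggregate(self, kind):
        """Count events of one kind (e.g. 'signup') per alternative."""
        counts = defaultdict(int)
        for event in self.events:
            if event["kind"] == kind:
                counts[event["alternative"]] += 1
        return dict(counts)
```

Keeping the raw events (rather than only the aggregates) matters for the qualitative mission of ICeDD, since free annotations and survey answers must remain inspectable, not merely countable.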

Finally, an experiment management system is needed that guides the users of ICeDD through the experiment design and controls the orchestration and data tools accordingly.

4.5 Stage (5): Optimization/Incremental Improvement

The main objective of ICeDD is to better understand the problem and solution space in cases where little or no knowledge exists. Once the fundamental interacting dependencies and constraints have been understood, it is possible, for example, to design a controlled experiment to learn in detail how the solution can be further optimized. Without this understanding, it would not be possible to eliminate interfering factors or to explain why the experiment has a valid operationalization. Of course, other methods suitable for incremental innovation (cf. Norman and Verganti [30]) can also be used for optimization.

4.6 Roles

In ICeDD we have the normal software development team, as in Scrum, as well as the users. The difference is that instead of a product owner we have a role called Value Designer. Her main purpose is to ensure that the intentions of the stakeholders are met and that value is delivered with the software application. In terms of Value-Based Software Engineering [4], the Value Designer is responsible for identifying all success-critical stakeholders and for mediating between them. She therefore needs to be neither a domain expert nor a technology expert, but she needs considerable knowledge of both sides. She is the only role involved in all stages as well as all steps of ICeDD.

5 Research Method

Inspired by the ideas of Action Research by Lewin [22] and Grounded Theory by Glaser and Strauss [15], the concept of ICeDD was developed iteratively and incrementally by acting in a real-world context, observing the effects, and reflecting on them with the help of literature research. The foundations for this are the three practical projects Firefighter Training System [36], History in Paderborn App (HiP-App) [37], and Zentrum Musik Edition Medien (ZenMEM) [28].

The Firefighter Training System was a one-year project from which it was realized that the Trialability and Observability of an innovation are important for the assessment of its value by users (cf. Rogers [35]).

HiP–App and ZenMEM have been running in parallel since 2014. While the HiP–App is developed by a group of constantly changing students (up to 20), ZenMEM is developed by a group of up to five permanent developers.

The setting of the HiP-App makes it particularly suitable for trying out processes and process changes. Therefore, every semester new building blocks (e.g. Scrum, scaling Scrum to two teams, Continuous Deployment) were introduced into the HiP-App to identify possible challenges arising from their use and to explain and resolve them with the help of a literature search in the corresponding fields. The concepts developed here were transferred to the tool development in ZenMEM in order to test them in an additional context. This has allowed us to move step by step closer to the ICeDD approach presented here.

6 Related Work

There are some approaches that use Design Thinking as a Front-End Technique (cf. [16, 17, 26, 46]). They stop at the milestone of a finished prototype and use only one solution for the software development. The limitation of this approach is that there are no experiments with software in the real world, especially not with several alternatives as in our approach. To uncover certain dependencies or constraints, as in the Netflix example in the introduction, at least two alternatives must be compared in an experiment in production.

This leads to controlled online experiments. Lindgren and Münch [25] surveyed the state of experiment systems in software development and list several examples of controlled online experiments. The goal of such experiments, as in Continuous Software Engineering, is to achieve small incremental quality improvements [10]. The overall goal of ICeDD, however, is to find unique and novel software applications with regard to value. Therefore, ICeDD's research mission is to build an understanding of the problem and solution space in order to find such a software application. For this reason, the focus is not on a mainly quantitative but on a qualitative approach to the subject.
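To make the contrast concrete, the quantitative core of a controlled online experiment such as the Netflix comparison from the introduction is typically a test on two conversion rates. The sketch below uses a standard two-proportion z-test; the function name and the sample figures are our own illustration, not data from any cited study:

```python
from math import erf, sqrt

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing the conversion rates of two
    alternatives in a controlled online experiment.

    Returns (rate difference, two-sided p-value). Assumes large
    samples and a pooled conversion rate strictly between 0 and 1.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, p_value
```

Such a test answers only "which alternative converts better"; it does not explain why, which is exactly the qualitative understanding that ICeDD's earlier stages are meant to build.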

7 Discussion and Further Work

In this paper we presented the outline of a value-centered approach for unique and novel software applications called Insight Centric Design & Development, created by adapting Design Thinking into a mixture of Front-End Technique and Integrated Development Philosophy. This is not based solely on theoretical considerations but has been created iteratively and incrementally by acting in real-world contexts, observing the effects, and reflecting on them with the help of literature research. It has turned out that considering Design Thinking with software on a purely process-related level is not enough. Rather, for a successful application, the product properties, people, and tools must also be considered. Otherwise, too many factors can inhibit the necessary experimentation with software alternatives in production.

Some parts of the approach have already been partially implemented by us. In order to evaluate its feasibility, we still lack, above all, tools that make experimenting with software more viable. Therefore, the experiment management system, user-specific online orchestration tool, and feedback collection and analysis system mentioned above will be investigated in future work.