
1 Introduction

There is ample evidence that being physically active provides substantial health benefits [1]. However, stimulating people to be physically active remains one of today's most challenging problems. Individual characteristics such as knowledge, social support and motivation, as well as modernization trends and environmental and economic conditions, influence physical activity [1, 2].

In recent years, reducing physical inactivity and promoting active lifestyles have been an ongoing topic in Human-Computer Interaction (HCI) and have increasingly become a field of application for low-cost technology, including mHealth and eHealth applications. In the past five years, the market for so-called wearables has grown exponentially, partly due to developments in IT and sensor technology [3, 4]. Developments in low-cost technologies have created vast opportunities for improving health and facilitating behaviour change [5], but these technologies are still insufficiently used to promote sustainable physical activity [6]. One possible reason is that engineers, when designing wearable devices for measuring and promoting physical activity (such as trackers), mainly focus on the technological aspects [7, 8]. This leads to technically ingenious products with advanced functions suitable for data collection. As a result, however, the impact of these wearable technologies on health remains limited [9]. Moreover, wearables are often abandoned after just a few months [7, 9].

To prevent this, technological aspects must be integrated with individual, social and environmental aspects. A personalized approach is absent in most consumer-available wearables because limited non-technological expertise is consulted and users are often not properly involved in the design process [7]. For example, the specific knowledge of various experts, such as behavioral scientists and human movement and health scientists, is often not jointly incorporated in the development of wearable devices [10]. Janssen et al. (2018) argue that designing health-related products for a heterogeneous group of people requires an interdisciplinary design approach [8]. To enhance the success of problem-solving, the participation and cooperation of individuals with different knowledge and expertise is required [11]. Interdisciplinary research and development can transfer knowledge, insights and findings between disciplines [12]. Yet collaboration between disciplines is not straightforward and needs special attention. Although the need for interdisciplinary teams has increased, researchers and developers have become more specialized over the past years [12]. Given the speed of change, the technological possibilities and the differing user needs during the research and development of wearable technology, there is often not enough time to build a stable, interdisciplinary team [11,12,13]. Members of interdisciplinary teams often communicate in different ways, define the same words differently depending on their expertise, and may hold a mental model in which they fully or partially disregard other disciplines because they perceive their own expertise as superior [11, 12, 14,15,16]. This leads to sub-optimal outcomes.

In this article we present COMMONS, a tool developed to enhance interdisciplinary collaboration in the design of health-related wearable technologies. This board game approach provides a playful situation in which different experts can learn, in a safe environment, to share expertise, knowledge and skills. COMMONS brings them up to speed with each other's definitions and ways of thinking, and shows what is important to each of them. The goal is to give an interdisciplinary team the opportunity to progress through its design cycle earlier, more effectively, and with higher quality.

2 What’s in a Game

In recent decades, the use of game elements has become increasingly popular [17]. Simultaneously, there has been a resurgence of board games. Many new kinds of board games, with different mechanics, themes and gameplay, have been introduced and become popular [18]. Games have several general characteristics in common [19]. They are often a simulation of a real event or situation but have few real-life consequences (i.e. it is not real, it is play). Next, games usually have a location (board) and a duration of play. Players compete against each other or against the game, individually or in collaboration with other players. Further, games often require a certain amount of skill, be it physical or mental, and playing the game improves these skills. Moreover, games usually have an element of chance and a level of uncertainty. If it is obvious beforehand who will win, the game is no longer fun to play [19].

Serious games go beyond entertainment. They are developed to have an impact on behavioral, motivational or attitudinal outcomes, which are widely linked to learning outcomes [20, 21]. In recent decades, these kinds of games have become increasingly popular [20]. In addition, we see that entertainment board games more often incorporate serious subjects in their design.

3 Design of a Board Game for Interdisciplinary Collaboration

Our aim was to develop a board game as a tool to enhance interdisciplinary collaboration in designing wearable technology. The game focuses on communication and on valuing each other's skills and expertise, while producing a concrete output. First, we identified interesting mechanics in existing games such as Dixit (2008) and Codenames (2015) that we believe can achieve this goal in our board game design. We have not limited our research and development to these games, but they provided a starting point for our iterations and user testing. In playtesting with the co-authors, we played Dixit and Codenames in a purpose-shifted setting in which the clues centered on wearables (i.e. activity tracking devices). The authors have backgrounds in different disciplines (i.e. human movement sciences, behavioral sciences and industrial design), and therefore value properties of activity tracking devices differently. By playing these games, we noticed that we started to understand each other's expertise and, furthermore, began trying to think like one another.

Second, we developed paper prototypes of two games with game characteristics similar to Dixit and Codenames. In a second playtesting session we tested these games: Marketplace and Cooperation. We found that certain characteristics of the first game (Marketplace) were counterproductive to the goals of the game: improving communication, shifting mental models and valuing each other's knowledge and skills. Participants played tactically, to win the game, instead of cooperating. An important lesson was that players must collaborate to win the game.

The second game (Cooperation, see Fig. 1) came closer to the intended objectives. Within the game, players are part of a development team and must develop an activity tracker based on various features. Each round consists of discussing a card with a feature on it. Players jointly determine whether or not this feature is used in the activity tracker they are developing. When there is no consensus, players have to roll a die, and there is a disruptive element we call 'The CEO': cards with a command that changes the state of the game. Important lessons were that no player may have the exclusive right to reject a card and that everyone must be actively involved in making a decision. Furthermore, since we are focusing on improving communication, we need to clearly define what is effective and what is not. Communication involves multiple facets, such as the frequency of communication between individuals, the context of what is communicated, the role of each person within a team, and the aforementioned differences between experts. As such, measuring communication can be difficult [23].

Fig. 1. Cooperation

We built the next iteration of the Cooperation game based on the knowledge and experience gained in the previous steps. We call this game COMMONS (see Fig. 2), which refers to shared resources in which each stakeholder has an equal interest [24]. Players of COMMONS are part of a development team, assembled by a fictitious woman who calls herself 'Kairos'. The team's assignment is to develop an activity tracker based on various features and focused on a persona. This persona consists of a description of a person's interests, motivation, character and goals in relation to physical activity. Each round, a player reads aloud a card with a feature on it (related to hardware, software, user experience design or a behavioral change technique). Players then vote according to the consent method, which we incorporated so that every player is actively involved in the decision making. Consent means that a decision is made when none of the players present has a predominant objection to it. In other words: only when there is agreement is there a decision. Players press the green button to vote for including the feature in the activity tracker, the blue button to give consent (neutral), or the red button to raise a predominant objection.

Fig. 2. COMMONS (Color figure online)

As a result of the voting, there are three options:

  1. All players press green or blue: the card is accepted and will be placed on the board.

  2. At least one player presses red and the remaining players press red or blue: the feature is rejected. The card goes onto the discard pile.

  3. At least one player votes green while another votes red: there is no agreement, and discussion follows.
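The three outcomes above can be expressed as a small decision function. This is an illustrative sketch only; the vote labels and function name are ours, not part of the prototype's implementation:

```python
def voting_outcome(votes):
    """Classify one round of consent voting.

    votes: one entry per player, each 'green' (in favour),
           'blue' (consent, neutral) or 'red' (predominant objection).
    Returns 'accepted', 'rejected' or 'discussion'.
    """
    has_green = "green" in votes
    has_red = "red" in votes
    if not has_red:          # option 1: only green/blue, consent reached
        return "accepted"
    if not has_green:        # option 2: only red/blue, feature dropped
        return "rejected"
    return "discussion"      # option 3: green and red clash
```

Note that a single red vote never decides the outcome by itself: it either rejects the card (when nobody voted green) or triggers discussion, which mirrors the rule that no player has the exclusive right to reject a card.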

When option 3 occurs, there is one minute to discuss the predominant objections. The players have to explain their arguments and discuss them with the others. A better understanding of each other's position arises, and there may be a shift in a player's view and choice. Then a second round of voting occurs, with the same three possible outcomes. If option 3 arises again, the card is placed in the 'discussion box' and will not be placed on the board.

When a card is placed on the board, there are 5 positions in which it can be placed. Position 1 is the most important to the players, position 5 the least important. When a card is placed, all lower-ranked cards move one position. So, if a new card is placed on position 4, the card previously at position 4 shifts to position 5, and the card at position 5 falls off the board and is placed in the 'nice-to-have box'. The players have limited time to discuss the placement of the card. They must agree on its position, but if time runs out, a single player decides where the card will be placed. Because there is then no proper consent, the die must be rolled. 'Kairos' comes around when a specific number is rolled. When this occurs, the players draw a 'Kairos card'. These cards contain a command that changes the state of the game (switching cards, taking back a card from the 'discussion box', etcetera). This game characteristic works disruptively and causes unpredictability. The only way players can deal with it is by working together.
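The board-placement rule amounts to inserting into a fixed-length priority list, with overflow going to the 'nice-to-have box'. A minimal sketch under our own naming assumptions:

```python
def place_card(board, card, position, nice_to_have):
    """Insert `card` at 1-based `position` on a 5-slot priority board.

    The card previously at that position and every card below it
    shift down one slot; a card pushed off position 5 goes to the
    nice-to-have box.
    """
    board.insert(position - 1, card)      # 1-based position -> 0-based index
    if len(board) > 5:
        nice_to_have.append(board.pop())  # lowest-priority card falls off
```

For example, placing a new card on position 4 of a full board shifts the old position-4 card to position 5 and moves the old position-5 card into the nice-to-have box.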

By facilitating choice and discussion, participants are actively involved in the process and have to share their points of view. Participants gain insight into each other's arguments and into what is important to them, and at the same time they create a mutual language and a shared set of definitions. Participants must compromise, and at the end of the game they have created a joint solution.

4 Research Through Design Prototype

COMMONS is a research through design prototype with elements for logging data (e.g. data on voting, time spent on voting and discussion, card positioning, etcetera) [25]. By logging data while playing, we hope to gain insight into the dynamic process of choice, discussion and compromise between members of an interdisciplinary team while they work to accomplish common goals [26, 27]. We have defined a number of variables that can give us information about these topics. To collect the intended data, the prototype uses voting boxes, cards with RFID tags and RFID card readers in the board. The prototype registers rounds, time, votes, and card movements.
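For illustration, one logged round could be represented as a simple record. The field names here are our assumption, loosely modeled on the columns described in Table 1, and not the prototype's actual data format:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RoundLog:
    """One logged round of COMMONS (illustrative schema)."""
    round_no: int                                   # column A
    card_type: str                                  # column B: 'Hardware', 'Software', 'UX' or 'BCT'
    votes: List[str]                                # columns C-F: 'green'/'blue'/'red' per player
    votes_after_discussion: Optional[List[str]] = None  # columns I-L, only when discussed
    on_board: bool = False                          # column O
    highest_position: Optional[int] = None          # column T: 1 (most important) .. 5
    in_end_solution: bool = False                   # column S
```

A record like this makes it straightforward to compute the aggregate measures reported below, such as how often each player voted green or how many rounds a card stayed on the board.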

In Table 1 we give an overview of the data acquired from a play session with four players with different areas of expertise. Player 1 has a background as a movement scientist, player 2 is a UX designer, player 3 has a background in business and marketing, and player 4 is an industrial designer. We developed 89 features, divided into four categories: (i) Hardware (15), (ii) Software (17), (iii) User Experience Design (9) and (iv) Behavioral Change Technique (BCT) (48). For this play session we selected 35 features (cards) and therefore played 35 rounds: 9 Hardware features, 7 Software features, 5 User Experience Design features and 14 Behavioral Change Technique features.

Table 1. Column A: round number; B: type of card; C, D, E, F: votes of players 1, 2, 3 and 4; G: consent or not; H: combination of votes; I, J, K, L: votes of players 1, 2, 3 and 4 after discussion; N: combination of votes after discussion; O: whether the card gets on the board; P: round number in which the card gets on the board; Q: round number in which the card gets off the board; R: number of rounds the card is on the board; S: whether the card is a feature of the end solution; T: highest position on the board

Of the 35 features, 14 were directly accepted, 10 were directly rejected and 11 prompted discussion. These 11 features were discussed, and after the second round of voting 5 were accepted, 4 were rejected and 2 were still under discussion. What stood out was which kinds of features were most accepted on the one hand and most discussed on the other. Eighty percent of the BCT features were directly accepted, against none of the Software features. Eleven percent of the Hardware features were directly accepted, compared to 40 percent of the User Experience Design features. The end solution contained 3 BCT features, 1 Hardware feature and 1 User Experience Design feature. When we look at the voting behaviour of the players (see Table 1), we see notable differences between them. Player 1 agrees on 11 of the 35 features being part of the activity tracker, against 19, 15 and 12 for players 2, 3 and 4. The inter-player differences in disagreeing with a feature are smaller: player 1 disagrees on 12 of the features, compared to 13, 14 and 15 for players 2, 3 and 4. Player 3 stands out when voting after the discussion: he agrees on 6 of the 11 discussed features, compared to 3, 3 and 4 for players 1, 2 and 4.

Features that were voted to be part of the activity tracker stayed on the board for an average of 7 rounds. One of the BCT-related features, 'Discrepancy', was on the board for 8 rounds but was not part of the final solution. Two other BCT-related features, 'Goalsetting behaviour' and 'Social reward', were both on the board for 6 rounds but also did not survive until the end of the game.

5 Discussion and Future Work

What we have learned so far is that COMMONS facilitates the conversation between different disciplines. Due to the structure the game offers, players are encouraged to talk about the content without unproductive debates over expertise or losing time by discussing a single feature for too long. Researchers, designers and developers within HCI often work from a multidisciplinary point of view. COMMONS, and the ideas behind it, could be an addition to existing toolboxes and methodologies.

It is currently too early to draw conclusions from these data. Yet it is striking that most Software features were directly rejected and were not part of the end solution. The background of the players might have played a role here. The voting behavior of player 2 (54% direct agreement) and player 3 (55% agreement after discussion) is also notable. Although the features were divided proportionally over the rounds, in the second part of the game (rounds 19 to 35) the players rejected 7 features, compared to 3 in the first part. It is too early to conclude whether this is due to a better understanding of each other's position or to a more common language.

In the next research phase, we will adjust the game rules. For instance, the 'Kairos cards' must not be drawn too quickly, but they do have to be drawn more often, and the commands must be less disruptive: players indicated that the command 'remove all cards from the board' caused a depressed atmosphere. During the play session we experimented with a longer discussion time, and players indicated that they preferred this, as it gave them more time to understand each other. We also saw that they shielded their voting boxes when pressing the voting button. Upon further inquiry, it turned out that players preferred to cast their votes anonymously, after which the choices are made public. This also prevents players from basing their vote on that of another player. Furthermore, players pointed out that the cards must not contain too much information, because overly descriptive cards leave little room for discussion.

In the next iterations, the game will be played with different kinds of groups. Participants will fill out a questionnaire, developed beforehand, containing questions about gaining insight into each other's point of view and arguments. This allows us to map these variables to our data. In addition, play sessions will be recorded on video to also collect information about the content and quality of the mutual conversations, arguments and discussions. The first test session was recorded, and we are exploring ways to measure the quality of the communication. We know from the literature that communication is important within interdisciplinary teams. The National Academies of Science [28] states that "At the heart of interdisciplinarity is communication - the conversations, connections, and combinations that bring new insights to virtually every kind of scientist and engineer" (p. 19). Certain communication processes are also important for good-quality communication: spending time together, discussing language differences and shared laughter [16]. We can gain insight into these processes through observation with targeted items.

Once these adjustments have been made, extensive testing will take place with players from different areas of expertise. In addition, we are curious about the differences between teams whose members know each other and teams whose members do not. We aim for five teams that know each other and five teams that do not. Given the context in which this game was developed (wearable technology related to physical activity), we want to involve various related areas of expertise. These include movement scientists, behavioral scientists, industrial designers, engineers, sport and exercise coaches, and users of wearable technology.

Our goal is to provide an interdisciplinary team with a tool that helps them progress through their design cycle earlier, more effectively and with higher quality. And perhaps with more pleasure and fun as well.