
1 Introduction

Home automation is not a new idea [17], and can be traced all the way back to the 1850s, when Joel Houghton patented a device for automating the job of washing dishes [5]. Moreover, with the emergence of remote controls, mobile devices, and the interconnectivity between different devices, home automation is becoming more attainable for the average Joe. However, if the development is influenced only by technology-savvy individuals, we risk creating systems that seem overcomplicated to less tech-savvy users [2].

To avoid alienating the user, we set out to create a user-oriented design solution for a home automation interface. Using practices from Participatory Design (PD), we included users in all parts of the design process [12]. Together we created a prototype for integrating and controlling household devices from a mobile device. The research questions were formulated as follows:

RQ1: How can a potentially complex home automation system be controlled through a user interface focusing on simplicity? And how can the different household appliances be visualized to provide an overview of the different devices in the household?

RQ2: How can principles from Participatory Design be employed for designing an interface for such a home automation system?

2 Related Work

Home automation is the concept of creating automatic behavior with and between devices in the home. Controlling these devices remotely can be seen as a subdomain of home automation, and this project concerns a proposed interface for that purpose. Many of the current interfaces for home automation are complex, “engineer style” interfaces where the user navigates through an abundance of menus and buttons. During prototype design, the idea of using a graph interface emerged, as the users suggested that a graph could be well suited to our project’s needs.

We reviewed several traditional interfaces for home automation: Mi Casa Verde, HomeOS, Homemaestro [9], as well as an internally developed home automation system at Østfold University College [10]. We also explored current issues of the home automation domain as a whole, for example as discussed by Brush et al. [2], who report the issues that actual users of home automation systems are experiencing.

2.1 Graph Interfaces and Visualizations

Graph interfaces are common in our everyday lives. Things such as the London Tube map and road maps stimulate people’s ability to think in terms of graphs, and can be interpreted by users without a technical background [15].

Using a graph to present information is not a new idea; in fact, it was estimated that 2.2 trillion graphs were published during 1994 [6], and it is very likely that this number is still growing. To the best of our knowledge, however, the idea of combining a graph based interface with home automation is novel.

Although we did not find any relevant work regarding graph based interfaces for home automation systems, we did find other graph based interfaces and guidelines for graph interfaces which were of interest.

Freire and Rodriguez’s article Preserving the Mental Map [4] provided important guidelines when deciding how the user should be able to navigate the graph interface and which traits we should focus on when looking at other graph based interfaces. The article emphasizes several aspects of graph interface design. One aspect is that the user should always be able to foresee the changes they make to the interface. Furthermore, it is vital to preserve the user’s mental map of the interface, as losing the sense of direction would cause the abstract interface to fail.

This article influenced the light in which we discussed other graph based interfaces, such as Xenakis [1] and the Reactable [7]. Figure 1 shows the Reactable application, where the user interacts with the direct manipulation interface by adding elements which become part of a beat.

Fig. 1. The Reactable app in use.

As previously mentioned, graphs are everywhere, but we could not find a graph interface used for home automation. We therefore based our prototype on existing applications within this field as well as existing graph interfaces intended for entertainment and other purposes.

3 Method

Our focus in this project has been to answer whether a potentially complex home automation system can be controlled through an interface that would be easy for users to understand and use. As we aspired to develop the interface together with potential users, we employed Participatory Design (PD) to engage the participants. This section briefly presents the different methods used for developing and evaluating the prototype together with the participants.

3.1 Data Collection

As we wanted to involve the users in the whole process of developing the prototype, data were naturally collected throughout the project. Data were collected through observation, interviews and questionnaires, using both quantitative and qualitative approaches. In the early phases of the project, users participated in a Future Workshop [14], a method from PD directed at enhancing the dialogue between designers and users. This session focused on concretizing issues with present-day home automation solutions, and on a brainstorming session where the participants developed ideas for how this situation could be improved [8].

In the subsequent phases of the development, the participants took part in two evaluations of the prototype. However, it is important to point out that the participants did not participate solely to test the interface. The feedback from the participants in the two evaluations was crucial for continuing the development of the design.

One of the techniques used for gathering data during the evaluations was the Think Aloud method [13]. When interacting with the prototype, the users were asked to think aloud, explaining their actions and thoughts regarding the prototype. Further, to attain more information about the users’ impressions of the prototype, we conducted individual semi-structured interviews after each participant had interacted with the prototype [3]. To gather quantitative data regarding the use of the prototype, the number of unique gestures was collected, making it possible to compare how the users interacted with the interface. Further, these data allowed us to assess difficulties with the interface. Finally, to assess the prototype’s usability, the users filled out a System Usability Scale (SUS) questionnaire.

3.2 Data Analysis

Using techniques from Grounded Theory, we extracted concepts from the interviews and observations [18]. To analyze the results from the System Usability Scale, the participants’ scores for each question were converted as described by Jeff Sauro [11]. The converted scores end up on a 0-100 scale, where a score above 68 is considered above average. Further, to make sense of the statistical data regarding the different users’ unique gestures, descriptive statistics were used to visualize the results as graphs [16].
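To make this concrete, the standard SUS scoring scheme (which, to our understanding, is what Sauro’s conversion follows) maps each of the ten five-point items to a 0-4 contribution and scales the sum by 2.5:

\[ \mathrm{SUS} = 2.5 \left( \sum_{i\,\text{odd}} (s_i - 1) \; + \; \sum_{i\,\text{even}} (5 - s_i) \right) \]

where \(s_i \in \{1,\dots,5\}\) is the response to item \(i\); the resulting score lies between 0 and 100.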

4 Design Iterations

The process of creating and developing a prototype together with participants consisted of three iterations. The first iteration was based on a method from Participatory Design (PD), where participants took part in a Future Workshop. The second iteration focused on further developing ideas from the first session into a low-fidelity prototype, whose viability was also evaluated in a preliminary usability test. Based on feedback from the usability test in the second iteration, the third iteration focused on evolving the low-fidelity prototype into a high-fidelity prototype. Further, in the final iteration, the final prototype’s functionality and usability were evaluated as participants again took part in a usability test.

4.1 First Iteration - Future Workshop

To create the prototype in the spirit of PD, the users were included in the design of the prototype from the early stages of the project. The participants consisted of four students between the ages of 23 and 25, representing both sexes. To gather ideas, criticism of today’s situation and possible solutions, users were invited to participate in a Future Workshop. In order to familiarize the user group with the project and the home automation domain, they first received a brief introduction to home automation. The participants were presented with already existing solutions and suggestions for possible interfaces. After the introduction, the users participated in a critique phase criticizing the current situation.

During the critique phase the participants expressed clearly that there had to be better ways of creating home automation user interfaces than the classic dashboard interfaces presented. After the critique phase, the participants described their vision of a home automation system. In a final phase, the users discussed how their previous ideas of a utopian home automation system could be implemented. During the Future Workshop, the participants proposed a design solution inspired by a graph. This idea was based on the belief that the interconnectivity between the different appliances could resemble a graph, where the nodes represented the different devices, and the edges represented the connection between different devices. Further, the different rooms in a home could be viewed as a graph, where the different rooms represented nodes, and connections between the different devices in the rooms represented the edges. The users suggested that this would provide a good overview of the home and the devices within it. One participant mentioned:

“There is a relation between these devices and they can communicate. Then, it can be considered a networks as well. A network with nodes, and relations between the nodes.” (Man, 24)

The participants also conceived the idea of not only controlling the household remotely, but also creating rules for how the devices should behave. In the implementation phase of the Future Workshop, users suggested drawing a line between the devices in the graph to create rules that would automate behavior between the devices. The graph and the rules were both ideas carried into the second iteration of designing the interface.

4.2 Second Iteration - Low-Fidelity Prototype

The graph based interface from the first iteration was later realized as a low-fidelity prototype, consisting of mock-ups created with Photoshop. The programming languages PHP and JavaScript were used to create a simple web application containing a single frame for displaying the different mock-ups, see Fig. 2. The frame supported user interaction through click, long click and drag.

Fig. 2. The low-fidelity prototype presenting rooms and appliances in a home.

To assess whether the idea of using a graph would be a suitable way of presenting the different devices in a household, the prototype was evaluated in a usability test. Participating in the evaluation were six students between the ages of 22 and 25, representing both sexes. Two participants had previously participated in the Future Workshop.

The test consisted of eight tasks, four of which focused on remote controlling home appliances, and four that focused on creating rules between the different devices. After the tasks were finished, the participants were given a System Usability Scale form. After the users completed the form, a semi-structured interview was conducted to gather qualitative feedback regarding the users’ experience of the interface.

4.3 Third Iteration - High-Fidelity Prototype

As the usability test in the second iteration yielded good results, a final implementation of the prototype was developed as an Android application. Section 5 describes the prototype.

In the evaluation of the final prototype, we gathered feedback from the users on how well the graph interface worked in illustrating a home, and whether they felt in control of the devices. Participating in the evaluation were five students between the ages of 23 and 34.

The users were asked to solve a set of tasks that were designed to resemble everyday scenarios. Afterwards, the users were asked to participate in a semi-structured interview [3], and then to fill out a questionnaire. The results from the evaluation are described in more detail in Sect. 6.

5 Prototype

This section describes the high-fidelity prototype for Android, resulting from the previous iterations described in Sect. 4. When starting the application for the first time, the user sees a white screen with circular room nodes in the middle of the screen, as illustrated in Fig. 3. The movement of a node is animated to clearly show how it reacts to interaction. Having support for multi-touch, the application allows movement of multiple nodes simultaneously. In this prototype, the nodes are split into two categories, representing rooms and devices. Although they look rather similar, they differ in both visual details and behavior.

Fig. 3. The initial positioning of room nodes.
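As an illustration of how such a two-category node structure could be modeled, the following Kotlin sketch outlines one possible representation. The paper does not specify the prototype’s implementation, so all class and property names here are assumptions derived from the descriptions in this section.

```kotlin
// Illustrative sketch only: the paper does not describe the prototype's data
// model, so class and property names are assumptions based on Sect. 5.

sealed class Node(var x: Float, var y: Float, val label: String)

// Room nodes group the device nodes belonging to one room.
class RoomNode(x: Float, y: Float, label: String) : Node(x, y, label) {
    val devices = mutableListOf<DeviceNode>()
    var expanded = false        // true while the room occupies one of the two focus slots
}

// Device nodes come in the three categories described in Sect. 5.2.
sealed class DeviceNode(x: Float, y: Float, label: String) : Node(x, y, label) {
    var highlighted = false     // dashed border while a dragged node may be dropped on this one
}

// Category 1: binary devices that are simply on or off (e.g. a coffee maker).
class ToggleDevice(x: Float, y: Float, label: String) : DeviceNode(x, y, label) {
    var on = false
}

// Category 2: devices with a value in a range (e.g. a dimmable light), shown as 0-100 %.
class RangeDevice(x: Float, y: Float, label: String) : DeviceNode(x, y, label) {
    var on = false
    var value = 0f
    var savedValue = 0f         // restored when the device is switched back on
}

// Category 3: devices exposing several functions as child nodes (e.g. a stove's hot plates).
class CompositeDevice(x: Float, y: Float, label: String) : DeviceNode(x, y, label) {
    val children = mutableListOf<DeviceNode>()
    var childrenVisible = false
}
```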

5.1 Rooms

Once the user taps or drags one or more nodes towards the middle of the screen, the other nodes drift to the edge of the screen to give space to the nodes in focus. The active nodes can either be dragged to the center of the screen, where two invisible slots take hold of them, or they can be put at the edge of the screen to be placed with the rest of the room nodes. The two invisible slots at the center of the screen “expand” the nodes, making the corresponding device nodes visible. The two slots reduce the space required to display all the devices, and make it possible to create connections between devices in different rooms.
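A minimal sketch of this slot behavior, building on the hypothetical RoomNode class above: a room node dropped close enough to a free slot snaps into it and expands, otherwise it is placed back with the other rooms. This is an illustration of the described interaction, not the authors’ implementation.

```kotlin
// Hypothetical focus-slot logic for the two invisible slots at screen centre.
class FocusSlots(
    private val slotX: FloatArray,      // x-coordinates of the two slots
    private val slotY: FloatArray,      // y-coordinates of the two slots
    private val snapRadius: Float       // how close a drop must be to dock
) {
    private val occupant = arrayOfNulls<RoomNode>(slotX.size)

    fun tryDock(node: RoomNode, dropX: Float, dropY: Float): Boolean {
        for (i in slotX.indices) {
            val dx = dropX - slotX[i]
            val dy = dropY - slotY[i]
            if (occupant[i] == null && dx * dx + dy * dy <= snapRadius * snapRadius) {
                occupant[i] = node
                node.x = slotX[i]; node.y = slotY[i]
                node.expanded = true    // expanding reveals the room's device nodes
                return true
            }
        }
        node.expanded = false           // no free slot nearby: back to the edge with the other rooms
        return false
    }

    fun release(node: RoomNode) {
        for (i in occupant.indices) if (occupant[i] === node) occupant[i] = null
        node.expanded = false
    }
}
```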

5.2 Devices

When visible, the device nodes are drawn around their corresponding room nodes, connected by straight edges to illustrate their parent room node. Three different gestures are used to interact with the device nodes. A device node can be tapped either to toggle its on/off state or to toggle the visibility of its child nodes, depending on which type of device node the user taps.
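Continuing the hypothetical model above, a tap on a device node could be dispatched on the node’s category roughly as follows; this is a sketch of the behavior described in the text, not the prototype’s actual code.

```kotlin
// Illustrative tap handling for the three device-node categories (Sect. 5.2).
fun onDeviceTapped(device: DeviceNode) {
    when (device) {
        is ToggleDevice -> device.on = !device.on   // binary devices simply toggle
        is RangeDevice -> if (device.on) {          // ranged devices remember their value...
            device.savedValue = device.value
            device.value = 0f
            device.on = false
        } else {                                    // ...and restore it when turned back on
            device.value = device.savedValue
            device.on = true
        }
        is CompositeDevice ->                       // composite devices show or hide their children
            device.childrenVisible = !device.childrenVisible
    }
}
```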

The device nodes are split into three categories. The first category is for devices with binary states, which can either be turned on or off. When turned on, the outer ring of the node is colored green instead of grey to indicate the node’s state. This category, as well as the following one, is depicted in Fig. 4.

Fig. 4. Device nodes from the high-fidelity prototype. The device node representing a coffee maker on the left can be toggled on and off. The device node for the light on the right can be dimmed.

The second category handles floating-point values and enables the user to select a value within a given range. It is distinguished by its fragmented outer circle and an additional outer arc whose length indicates the selected value. This node enables the user to perform tasks such as dimming lights and adjusting the volume of a music player. The gesture for adjusting the value of these nodes is a swipe gesture where the user swipes from the inside of the device’s circle to the outside. When the user’s finger exits the circle, the node’s text changes to display its value between 0 and 100 %. This value is given by the finger’s angle relative to the top of the node with respect to its center. As the value displayed inside the device node is continuously updated, the user can fine-tune it with more precision by moving the finger further away from the node. When the node is tapped and turned off, the current value is saved in order to be restored when the device is turned on again.
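The angle-to-value mapping described above could be computed roughly as in the following Kotlin sketch. The clockwise-from-top convention and the function name are assumptions; the paper does not give the exact formula.

```kotlin
import kotlin.math.PI
import kotlin.math.atan2

// Hypothetical mapping from a touch position to a 0-100 % value, measured as
// the clockwise angle of the finger around the node's centre, starting at the
// top of the node (screen coordinates, y grows downwards).
fun valueFromTouch(centerX: Float, centerY: Float, touchX: Float, touchY: Float): Float {
    // atan2(dx, -dy) yields 0 at the top, PI/2 at the right, PI at the bottom.
    val angle = atan2((touchX - centerX).toDouble(), (centerY - touchY).toDouble())
    val clockwiseFromTop = (angle + 2 * PI) % (2 * PI)   // normalise to 0..2*PI
    return (clockwiseFromTop / (2 * PI) * 100).toFloat() // map to 0..100 %
}
```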

The third category allows devices to express multiple functions as child nodes of the corresponding device node. A stove for instance will often have multiple hot plates. These can then be illustrated as child nodes placed around the stove’s device node. The child nodes of a device node are allowed to use icons instead of text to describe themselves. An example of this device node category is depicted in Fig. 5.

Fig. 5. The device nodes of the kitchen in one of our scenarios.

The interface enables users to create connections between nodes in order to apply rules for how the devices should affect each other. A user can, for instance, create a rule instructing the light in the kitchen to turn on when the coffee maker starts brewing. These connections are created by long pressing a node, which causes it to detach from its fixed position around its room node, and then dragging it on top of the node to be influenced. As the node is detached, the device on which the interface runs vibrates, and the borders of connectable device nodes become dashed as a visual cue that the dragged node can be dropped on top of them.
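A minimal sketch of this rule-creation interaction, building on the hypothetical device-node classes above: a long press detaches a node and highlights valid targets, and dropping it on another device records a rule between the two. The Rule type and all names are illustrative assumptions, not the authors’ implementation.

```kotlin
// Hypothetical rule model: "when the trigger device changes, affect the target device".
data class Rule(val trigger: DeviceNode, val target: DeviceNode)

class RuleEditor(private val allDevices: List<DeviceNode>) {
    val rules = mutableListOf<Rule>()
    private var dragged: DeviceNode? = null

    fun onLongPress(node: DeviceNode, vibrate: () -> Unit) {
        dragged = node
        vibrate()                                          // haptic cue that the node has detached
        for (d in allDevices) d.highlighted = d !== node   // dashed borders mark valid drop targets
    }

    fun onDrop(target: DeviceNode?) {
        val source = dragged ?: return
        if (target != null && target !== source) {
            // e.g. "when the coffee maker starts brewing, turn on the kitchen light"
            rules += Rule(source, target)
        }
        for (d in allDevices) d.highlighted = false
        dragged = null
    }
}
```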

6 Results and Discussion

In this section we present and discuss findings emerging from the interviews, questionnaires and observations in the process of developing the prototype together with the users.

6.1 The Interface

During the Future Workshop, the users suggested a graph design for the prototype. The suggested solution showed potential for visualizing the devices in a household and providing control over a home automation system. Further, the solution seemed to be a promising improvement over the “engineer style” interfaces presented in Sect. 2.

When asked if they felt in control, the participants expressed that the interface’s features made it easy to feel in control of the different household appliances. In regard to the design, all but one of the users responded that the graph worked well as a way of visualizing the devices in a home. One participant explained that he found the node structure much more playful than the traditional directory structure often found in other systems. As the participant explained:

“This type of node structure is a lot more playful than a directory structure. If you look at Windows or Linux, you’re always moving downwards into things, but being able to move across and outside of those things, and creating your own rules and connections - I like that a lot, I’m a fan of that train of thought.” (Man, 24)

When looking at the time and number of clicks the participants used to solve the scenarios in the user test, we observed that the number of interactions was similar between the participants. However, they used different gestures when solving the tasks. Further, there was little variation between new users and users who had previously been involved in the development.

During the evaluation of the prototype, several participants commented on how smooth it felt to create rules to connect the devices together. Users agreed that the rule functionality added more value to the application, and that it would be less interesting if it worked only as a remote control. Further, the users responded positively to the haptic feedback provided by the different gestures. As one participant stated:

“I also like the idea of playing with the node and pulling it around.... It’s the good feeling of pulling something around. Physically grabbing the node.” (Man, 34)

In addition, the scores from the System Usability Scale indicated that the overall usability of the interface was good. The average score from the questionnaire was 84, and a score above 68 is considered above-average usability.

6.2 The Use of Participatory Design

The participants’ ideas, suggestions and experiences with the prototype were, throughout the project, an invaluable driving force for developing the interface. By including the participants throughout the whole design process, we were able to gather valuable data that might not have been possible to obtain otherwise.

Due to limited time and resources, the group of participants consisted of students and PhD candidates from the Faculty of Computer Sciences at Østfold University College. It is possible that the suggested graph based design is biased towards users with technological backgrounds. However, our findings suggest that less tech-savvy users would be able to use the prototype satisfactorily, as people interact with networks and graphs on a regular basis. Not having tested with users without a technical background in this project, it is difficult to foresee how a graph interface in this context would be perceived by users in general. Nonetheless, the participants supported the use of a graph based interface when asked if they foresaw any problems with family members using the application. The users unanimously thought the concept of the prototype was sufficiently easy for anyone to learn.

Further, we purposefully involved two of the same users in several design iterations. This allowed us to compare gathered data from novice and seasoned participants to see if they behaved differently when using the prototype. In the evaluations of the prototype, we observed that new users suggested more drastic changes than the more seasoned participants. This may be attributed to the sense of ownership the experienced users might have had towards the prototype. However, little else was different between the two types of users.

7 Conclusion

In this paper we have explored how a potentially complex home automation system can be controlled through a user interface designed with a focus on simplicity. Further, we wanted to create the solution in collaboration with the users.

Based on the results we gathered during the final user test, we can conclude that the interface did function well as an abstraction of the home and its devices. All users described that they felt in control of the devices, and that the graph interface worked well as a visualization of the home and its appliances.

Additionally, including the users throughout the process has undoubtedly been invaluable in developing the interface. The participants not only helped evaluate the design’s viability and usability; their suggestions for design and improvements were a definitive driving force in the project.

8 Future Work

In this project we focused on designing a front end solution for controlling a home automation system. During the development of the prototype, suggestions for improvements emerged, and one of the paper’s authors is presently developing the interface further.

For this project, we envisioned a future where the devices automatically connect and communicate by themselves. Another of the paper’s authors is currently working on creating a prototype that allows smart devices to discover each other and intercommunicate using proven Web-technology.