
1 Introduction

According to Schramme [1], there are several definitions of naturalness. We form our individual feeling for naturalness while growing up. The same applies to our personality, which is partly shaped by our experience and society. In this context, we define natural, or rather intuitive, interactions as self-explanatory or habitual actions. People in modern society learn handwriting from an early age. It is considered a natural interaction because we no longer have to think about how to use a pen. Compared with using a keyboard, writing with pen and paper supports our motor memory, which leads to reinforced memories [2]. Changing circumstances also affect our impression of naturalness. Smart environments within the internet of things should take this into account in order to fit our own perspective and embrace everyday objects. Yet, a clearly defined service performance is important for controlling our environment. There must be a central contact point that takes action when assignments converge or overlap.

Fig. 1.

Working prototype with an external projector and drawing tablet.

This work provides a novel interface approach for a device within a smart environment (see Fig. 1). Our concept addresses how the internet of things deals with naturalness and personality, and the role of the user within the system. Section 2 gives an overview of the methods used, existing tools for self-management and devices for working effectively. We outline their limitations and compare them with each other. Section 3 explains our conceptual approach, which includes hardware and software. We give detailed information about the main interface, gestures and functions in terms of planning and self-reflection. In Sect. 4 we present our proof-of-concept prototype and how to interact with the device. Section 5 gives an overview of tests and results for our proposed device. We conclude with future work, improvements and possible extensions.

After conducting research and qualitative interviews, we define our target group as digital nomads. These self-employed people work at several places and execute the majority of their tasks by themselves. Most of the time they work on the move. Typical working environments are cafés, coworking spaces or at home. Independence and self-realization are named as the main reasons for self-employment.

Our research and interviews showed that the main problems are the underestimation of planning and organization. Most of the interviewed people neither set goals for upcoming months nor reflect on their work periodically [3]. Often there is no interest in this, because it is seen as complicated, time-consuming and annoying. However, most of the interviewed people had no knowledge of specific methods. Furthermore, there is a lack of clarity in tasks and task prioritization. The absence of fixed working hours leads to unbalanced working days and too few breaks [3].

2 Related Work

To create an overview of the topic and the state of the art, we analyzed suitable self-management methods. We conducted interviews and surveys with people from the target group. Furthermore, several self-management methods were discussed with selected experts. In addition, we evaluated existing apps with regard to function, interaction and usefulness. To get an overview of the technical possibilities, we searched for devices that combine digital and haptic interfaces and offer a good handwriting experience.

2.1 Methods

We found that reflection is an underestimated aspect for self-employed people (see also Berger et al. [3]) and should be done periodically. Self-management is a method of planning one's own behavior using concrete strategies [4]. We found several methods for planning and reflecting, all with different purposes and strategies. Therefore, a single method cannot be a solution for every situation. Approaches like the ‘ABC’ method [5] or the ‘Eisenhower’ method [6] are useful for a quick categorization, but need to be reconsidered in more complex situations. The ‘ALPEN’ method [6], ‘Importance-Urgency Mapping’ [7] or the ‘Getting Things Done’ method [8] are more complicated approaches, but give a better overview and are more precise in handling tasks. To get an even better understanding of self-management, we worked together with a management consultant. He helped us define important aspects: plan every day, have long-term and short-term goals, split tasks up into single goals, define tasks so that each can be done in five minutes to three hours, and reflect periodically on your goals. Afterwards, we designed and selected methods for our final approach.

2.2 Tools

We selected and studied different existing tools and apps for desktop computers as well as for smartphones. Applications were categorized into two groups. ‘Trello’ [9] or ‘Notion’ [10] have benefits in their range of functions. They have several features to fit different needs, like dragging tasks around freely or having a completely individualized interface. But these benefits are problems at the same time: the overload of features makes those applications confusing. Without knowing self-management methods or approaches to organizing tasks, these tools are of little use. Applications like ‘Ike’ [11] focus on serving a specific method, in this case the ‘Eisenhower’ method. It has a clear interface, which helps the user to understand the method. Still, it creates limitations: the method itself does not work in every scenario, so it is not a tool for everyday use. Working with just pen and paper offers both the freedom of individualization and a clear method, especially with specific printouts for methods. Nevertheless, there are no digital advantages. As a result, we defined that our device has to combine digital and analogue advantages and should be able to illustrate methods when needed, but without distracting the user.

2.3 Devices

We differentiated between touch projectors and smart pens in order to compare them to our approach. Easy usage is probably the most important aspect for the user. Smart pens such as the ‘Moleskine Smart Writing Set’ [12] or ‘Livescribe 3’ [13] are functioning ink pens that transfer analog writing to a digital device. They feel less like a piece of equipment and more like a habitual writing implement. After writing, they sync via Bluetooth with an iOS or Android device and offer easy handwriting-to-text conversion. A drawback is that they all require special paper for the recognition. Other devices, such as the ‘Equil Smartpen 2’ [14] or the ‘Wacom Inkling Digital Sketch Pen’ [15], work on regular paper, using a clip-on receiver to record writing or sketching. Furthermore, there is the ‘Apple Pencil’ [16] for the iPad Pro, which writes directly on the screen but can lead to distraction through unrelated notifications. Apart from smart pens, there are also short-distance projectors with multi-touch function like the ‘Benq MW883UST’ [17]. Their disadvantage is that they are stationary and unwieldy. Another existing touch device, the ‘Sony Xperia Touch’ projector [18], is smaller and can be used in different scenarios, but does not take personal preferences and the environment into account.

3 Proposed Device

We present ‘Selv’ as an independent smart device intended to encourage a productive work life. It offers a focused way of self-management and self-reflection, while making progress and success visible. In our case, this includes goal setting, planning of visions and future tasks, as well as keeping an overview of existing tasks. Not every self-management method works with every person or in every environment. As a result, the system should be adaptive and must not cause any distraction by itself.

3.1 Concept Device

Our proposed stand-alone device is transportable and can be used in any kind of desk-work environment. For this purpose, it is hand-sized and robust. It offers a clear form language with a harmonious and stable, but slightly irregular shape. The overall appearance is courteous rather than overly technical (see Fig. 2).

Fig. 2.

Visualization of the form model and its activation process: (a) Working and (b) sleeping mode.

The connection between analogue and digital components simplifies focusing on work and personal targets. The concept device has an integrated short-distance projector and a pen. The projected surface works as an unobtrusive interface, while the pen is used as an intuitive interaction tool. ‘Selv’ has a capacitive surface all around, which turns the projection on and off when the user touches it. The light-intense laser projector inside the device projects onto a flat desk surface. To detect the three-dimensional position of the pen, two infrared sensors are used. In addition, the pen includes an infrared LED at its head. ‘Selv’ also includes a battery for better transport, a speaker, detection sensors, LEDs as well as its own processing unit. Depending on the interaction of the user, for instance succeeding in finishing a task, there are several light and sound states. The pen and the projection surface form a single unit with visual and auditory feedback functions.
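To illustrate how two infrared sensors can locate the pen, the following sketch triangulates a position on the desk plane from the two bearing angles reported by the sensors. The baseline distance and the angle convention are our own illustrative assumptions, not specifications of the device.

```python
import math

def triangulate(angle_left, angle_right, baseline=0.12):
    """Estimate the pen position on the desk plane from the bearing
    angles (in radians, measured from the baseline towards the pen)
    of two IR sensors placed `baseline` metres apart on the x-axis.
    The left sensor sits at the origin, the right one at (baseline, 0)."""
    tl, tr = math.tan(angle_left), math.tan(angle_right)
    # Intersect the two bearing rays: y = x * tl and y = (baseline - x) * tr
    x = baseline * tr / (tl + tr)
    y = x * tl
    return x, y
```

A third coordinate (pen height) would follow analogously from the sensors' vertical bearing angles; the 2D case above conveys the principle.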

3.2 Main Interface

Instead of a classical graphical user interface with objects such as buttons or sliders, we propose to recognize interactions and text input with the stylus. Grabbing the pen or touching the device triggers the projection to turn on. Consequently, the interface turns off after the device has not been used for a while or is touched again. The graphical interface is minimalistic and reduced in terms of design elements. It is mainly defined by the user and his handwriting. When using the device for the first time, users should get a feeling for the interface by sketching roughly. After goals have been defined, we use a method where the duration of tasks is reflected by the visual space on the interface (see Fig. 3). During the reflection process, the interface stays almost completely empty so the user can concentrate on their own thoughts.
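The mapping from task duration to visual space could be sketched as follows; the canvas dimensions, the proportional layout rule and the minimum box height are illustrative assumptions rather than the device's actual layout algorithm.

```python
def layout_tasks(tasks, canvas_height=600, min_height=40):
    """Assign each task a vertical slice of the projection whose height
    is proportional to its estimated duration. `tasks` is a list of
    (name, duration) pairs; returns (name, y_offset, height) triples."""
    total = sum(duration for _, duration in tasks)
    boxes, y = [], 0
    for name, duration in tasks:
        height = max(min_height, round(canvas_height * duration / total))
        boxes.append((name, y, height))
        y += height
    return boxes
```

A longer task thereby occupies visibly more of the interface, making its weight in the day apparent at a glance.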

Fig. 3.

Representations of interactions with the pen and the device: (a) Defining Goal and Deadline; (b) Turning on the device by touching it; (c) Doing habitual gestures; (d) Reflecting on tasks done.

3.3 Gestures

In order to guarantee natural interaction, ‘Selv’ has the ability to learn habits in terms of writing gestures such as marking or confirming text. In our interviews we found that people use diverse symbols when they work with pen and paper. There were basically three functions of the symbols used: erasing/deleting a task, marking as done (confirming) and selecting. However, the interviewed persons used various symbols for the same functions. For example, a cross was used for deleting as well as for selecting. As a result, we decided that ‘Selv’ has to save individual gestures for every user. Consequently, ‘Selv’ asks the user to perform the gestures once in the very beginning. Gestures work at any time and are always connected with their assigned function. Therefore, we can ensure that everybody has a personal and natural way of interacting. These gestures are recognized using machine learning. Instead of recognizing the finished image, it is the movement of the pen that is recognized. Thereby, all kinds of gestures can be used. To ensure that users do not have to repeat gestures several times, there should be a library that saves all gestures from all users on an external server. When a user performs a gesture for the first time, ‘Selv’ can compare it with already existing ones. If it is completely new, ‘Selv’ will ask the user to repeat the gesture at least three times.
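As a minimal sketch of such movement-based matching, a new pen stroke could be compared against the stored library with dynamic time warping over the sampled pen positions. The stroke format and the distance threshold are illustrative assumptions; the concept leaves the concrete recognizer open.

```python
import math

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two pen strokes,
    each given as a list of (x, y) sample points."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(stroke, library, threshold=50.0):
    """Return the label of the closest stored gesture, or None if no
    stored gesture is similar enough (i.e. the gesture is new)."""
    if not library:
        return None
    label, template = min(library.items(),
                          key=lambda kv: dtw_distance(stroke, kv[1]))
    if dtw_distance(stroke, template) > threshold:
        return None
    return label
```

A `None` result would correspond to the case where ‘Selv’ asks the user to repeat the new gesture for training.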

3.4 Functions

Planning. Having an overview of future goals and tasks is very important for planning in advance. Every task is sorted into short-term goals, which have a deadline. At best, the user should plan every day in the morning before starting to work by setting up and selecting tasks for the day. ‘Selv’ supports the user by warning him as soon as too many tasks are selected. As every task has a rough time duration, the total should not exceed eight working hours. As an additional help for selecting the right amount of tasks, we created ‘theme days’, which help the user orient himself by estimating how he should start the day. ‘Selv’ analyzes tasks, deadlines, progress over the last days and also outside influences such as sleep data and breaks to give the right support (see Table 1). We assume that our device is connected to our phone and other devices within the smart environment.
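The overload warning described above reduces to a simple check against the eight-hour limit; the task record format and message wording below are our own illustrative choices.

```python
def plan_day(selected_tasks, limit_hours=8.0):
    """Sum the rough duration estimates of the selected tasks and
    warn as soon as they exceed one working day."""
    total = sum(task["hours"] for task in selected_tasks)
    if total > limit_hours:
        return f"Warning: {total:.1f} h selected, exceeds the {limit_hours:.0f} h working day"
    return f"OK: {total:.1f} h planned"
```

In the device, such a check would run each time the user selects or deselects a task during morning planning.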

Table 1. Overview of theme days.
Fig. 4.

Overview of connected devices within the optimal environment.

A power day means that the user has a lot to do and is well prepared for a long, productive working day. A relax day means that the user has completed a lot of tasks, lacks leisure and should concentrate on only a few tasks. In contrast, a motivation day represents a day where the user has a lot to do, but has not done much lately and is not well prepared.
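These theme-day definitions can be expressed as simple rules over workload, recent progress and preparedness; the thresholds and input signals below are hypothetical stand-ins for the analysis ‘Selv’ would perform on tasks, deadlines and sleep data.

```python
def theme_day(open_workload_hours, recent_done_hours, well_rested):
    """Pick a theme day from the open workload, the hours completed
    in the recent past and whether the user is well prepared
    (e.g. well rested). Thresholds are illustrative only."""
    busy = open_workload_hours > 6            # a lot to do
    productive_lately = recent_done_hours > 20  # has done a lot
    if busy and well_rested:
        return "power day"
    if productive_lately and not well_rested:
        return "relax day"
    if busy and not productive_lately:
        return "motivation day"
    return "regular day"
```

A production version would weigh these signals continuously (e.g. with learned thresholds) rather than with fixed cut-offs.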

Reflection. A major part of reflection is to underline progress and success. Therefore, every morning and evening ‘Selv’ starts with the successes of the last 24 h. This gives users a positive start, motivation and awareness. These little notes already help users reflect subconsciously on tasks done. A more conscious reflection is provided by a continuous reflection phase at the end of the last working day of the week. Instead of just showing tasks done, users have to evaluate them and compare them with their long-term goals. Still, this task should be easy, understandable and performed in a short amount of time. Just by crossing or ticking every task, the user defines whether the task helps him reach the long-term goal or not. It does not have to be a specific gesture, to keep users in the flow. Afterwards, ‘Selv’ asks specific questions about their progress, for instance: ‘What worked very well?’, ‘What are you proud of this week?’, ‘What could you change next week?’. These questions are based on the individual progress. Using machine learning, they are categorized based on how many tasks were done in a week and how many of them were helpful for long-term goals. This complete phase takes about ten minutes and should not be longer; otherwise users could lose interest.
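A minimal, rule-based stand-in for this progress-dependent question selection could look as follows; the category names, the 50% cut-off and the exact grouping of the questions are our own assumptions for illustration.

```python
QUESTIONS = {
    # A good week: reinforce what worked.
    "productive": ["What worked very well?", "What are you proud of this week?"],
    # Few helpful tasks: prompt a change of course.
    "off_track": ["What could you change next week?"],
}

def weekly_questions(tasks_done, tasks_helpful):
    """Choose reflection questions from the week's progress: if at
    least half of the completed tasks served a long-term goal,
    ask reinforcing questions, otherwise ask change-oriented ones."""
    ratio = tasks_helpful / tasks_done if tasks_done else 0.0
    return QUESTIONS["productive"] if ratio >= 0.5 else QUESTIONS["off_track"]
```

The learned categorization mentioned above would replace the fixed ratio with a model trained on the user's weekly history.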

3.5 Selv and Environment

To guarantee that smart environments work in our case, we established that we need a central contact point which can influence and control surrounding devices (see Fig. 4). ‘Selv’ sends user input through the central contact point to other objects like the computer. Furthermore, ‘Selv’ responds to received data, for instance sleep data, ambient noises or locations, and integrates it into its behaviour. Yet, users have the ability to control everything while having one main device between all other objects.

4 Prototype

The model of the prototype was built from a hollow plaster form. To ensure precise detection and testing results within the prototype, we use an external projector placed above the device, a graphics tablet pen and a matching graphics tablet, which is positioned under a solid surface. The recognition of the pen is possible through thin surfaces, in our tests up to 7 mm. Aligning the projection with the position of the graphics tablet's surface gives the illusion of a touchscreen (see Fig. 5). In addition, we connected ‘Selv’ to a computer to make use of the speaker and battery. A small processing unit inside the ‘Selv’ model controls lights and sounds and sends user inputs of touching or tilting (see Fig. 6).

Fig. 5.

Construction of the prototype.

The software itself runs on the computer as a web application. It provides the complete main interface with writing short-term tasks and goals, setting up long-term goals, defining personal gestures, simulated theme days and reflection with simulated questions. The text recognition is provided by the machine-learning API ‘MyScript’ [19]. This API is also used for gesture recognition; gestures are recognized like symbols or letters. Therefore, non-symbolic gestures like wild scribbles are not supported so far. The well-trained algorithm of the API makes it possible for the user to enter his gesture just once. A small local database makes it possible to save data over longer time periods, even after shutting down the device. Navigating through tasks has to be possible with the pen, because it is our only screen-based input. This works by pointing the pen at the corners of the projection; scrolling stops when the pen is back in the center of the projection. This setup works as a tactile and visual prototype without obvious limitations. As a result, we could realize an understandable presentation and realistic tests.
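The corner-based scrolling can be captured by a small mapping from pen position to scroll velocity; the edge margin and scroll speed below are illustrative parameters, not the prototype's actual values.

```python
def scroll_velocity(pen_x, pen_y, width, height, edge=0.15, speed=200):
    """Map the pen position on the projection (in pixels) to a scroll
    velocity: pointing near an edge of the projection scrolls in that
    direction, returning to the centre stops the movement."""
    vx = vy = 0
    if pen_x < edge * width:
        vx = -speed
    elif pen_x > (1 - edge) * width:
        vx = speed
    if pen_y < edge * height:
        vy = -speed
    elif pen_y > (1 - edge) * height:
        vy = speed
    return vx, vy
```

As the user study in Sect. 5 showed, this interaction proved hard to discover, so a finger-drag navigation would likely replace it.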

Fig. 6.

Technical specifications of the prototype.

5 Experiments and Results

Our main goal in testing was to find out whether people understand the device as a whole. This includes whether people remember interactions and gestures, understand the methods and describe the overall interaction as ‘natural’. For this, we collected feedback and ran tests at conferences and presentations using the thinking-aloud method. Among these conferences were ‘ThingsCon 2017’ in Amsterdam [20] and the ‘Future Convention 2017’ in Langen (Frankfurt Main) [21].

Moreover, we did a technical test of pen input without a graphics tablet. For this, we built an infrared pen and used a Wii Remote as a receiver. The Wii Remote was positioned directly above the projector. After a small calibration with a plug-in, the pen is recognized whenever the LED illuminates. Although the technical part of this test worked, the resulting writing feel was imprecise: lines and handwriting were not as accurate as with a normal pen. Other projects have already shown that this technique is possible even without inaccuracies. Hence, we decided to focus on a quick and working prototype for non-disturbing and realistic tests.
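The essence of such a calibration is a mapping from the Wii Remote's IR camera coordinates to projection coordinates. The two-point scale-and-offset sketch below assumes the camera is roughly parallel to the projection; a full setup (as in the plug-in we used) would fit a four-point homography to correct perspective distortion. All coordinates here are hypothetical.

```python
def make_calibration(p_cam, p_screen, q_cam, q_screen):
    """Build a camera-to-screen mapping from two reference points,
    each given as a (camera_xy, screen_xy) pair. Assumes no
    perspective distortion (axis-aligned scale and offset only)."""
    sx = (q_screen[0] - p_screen[0]) / (q_cam[0] - p_cam[0])
    sy = (q_screen[1] - p_screen[1]) / (q_cam[1] - p_cam[1])

    def to_screen(cam_x, cam_y):
        # Translate relative to the first reference point, then scale.
        return (p_screen[0] + (cam_x - p_cam[0]) * sx,
                p_screen[1] + (cam_y - p_cam[1]) * sy)

    return to_screen
```

During calibration the user would touch the IR pen to known screen positions to collect the reference pairs.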

The first thing we noticed was that nearly all the defined visual spaces were too small to write in. Users tended to write beyond those fields, which impaired the input. Bigger visual spaces would not be a solution: we observed that many users prefer to write anywhere, without limits of space. So unlimited sizes and no space boundaries could improve the experience. A further issue was the navigation. Pointing the pen at the corner of the nearly invisible projection frame was too complicated for all participants. It was not natural at all, nor familiar from any other interaction users could know from other devices. In the beginning, most of the users thought the surface was touchable with their fingers. Thus, it would make sense to use fingers for navigation, comparable to moving a piece of paper on a desk. Also, it would be handy to reduce the need for scrolling as much as possible.

Long-term studies of reflection and the methods were not possible, because the prototype is not in a portable state yet. But the participants understood the interactions quite well and there were no major problems. Most of the users enjoyed answering questions on a completely empty space. Nearly every participant understood the applied methods and could use them directly after a short introduction. Nevertheless, some had problems understanding the definition of a (small) goal and did not know what to enter. Gestures were remembered perfectly and nobody forgot how to draw an individual gesture. In general, we received positive feedback. Participants most often used the term ‘magical’ to describe the experience.

6 Conclusion

Smart environments offer a high potential to improve intuitive and personal interactions in our everyday life. In this paper we have presented a novel interface approach for better self-management and reflection. We discussed methods, tools and existing devices, and derived our conceptual approach from them. As a result, we have created the concept of a self-enhancement method that increases user productivity through task organization and self-reflection, combined in a transportable device. Intuitive interactions are provided through gestures, handwriting recognition and a projection surface. We have shown testing results for the interactions using our own proof-of-concept prototype. The projection surface as a customized interface, in combination with the pen, offers enormous potential for versatile use. Beyond self-management and reflection, other work scenarios such as drawing, an integrated calendar, working on presentations or collaborative work can be considered. In particular, different management methods could be applied to users individually. Therefore, we want to conduct a more detailed user study. Another future goal is to extend the functionality through user context awareness. This could be done by analyzing the work behaviour of users in combination with sleep data or the calendar to give personal prognoses for upcoming days. We have shown that intuitive and personal interfaces can create a more user-centered and comprehensible interaction.