1 Introduction

Human movements are a physical representation of the way people perceive a space, a product and, more generally, their world. We interact with products by moving around them and touching them, with bodily expressions that are entirely subjective and reflect our personal experiences. The movements we perform to interact with an object define the boundaries of a space where subject and object are interrelated (i.e. the space of interaction). The notion of spatial interaction refers to a spatiality of sensation, which should be distinguished from a spatiality of position [1]. Thus, human movements are interesting for the design domain in their ability to embody the user experience, offering a rich body of knowledge on the human-product interaction.

Traditionally, the study of human postures and gestures has been carried out with a quantitative approach for ergonomic assessments, to check functional requirements and usability features of products. Product designers tend instead to prefer qualitative observations of users’ movements for their greater feasibility under conditions of limited time and resources. To study human movements quantitatively, Motion Capture is considered the most accurate technique, providing a rich set of information on the trajectories, position, orientation and speed of users’ movements. It is commonly employed, coupled with Virtual Reality technologies, at the end of the design process to assess aspects such as ergonomics and usability. However, designers rarely adopt this technique. Certainly, the technical skills required to implement a tracking session, the complexity of the data retrieved, and the absence of guidelines discourage designers from using Motion Capture technologies. Moreover, the computational form of tracking data is largely inaccessible to designers: they need a flexible and meaningful representation that can be “read” from their perspective. For instance, to effectively support the design process, motion data should be elaborated to suit 3D modelling software tools. No extensive method was found in the literature addressing the visualization of human movements for concept design purposes.

In this research, human movements are studied for their ability to nurture and inform subsequent design actions. The traditional methods used to observe human gestures will be explored, comparing the quantitative and qualitative approaches. Subsequently, we will reflect on the possibilities offered by Motion Capture technologies, and on the limitations and barriers that hinder their use in the design process. This paper introduces a method to use Motion Capture technologies from a designer’s perspective, at the beginning of the design process. Finally, we will describe the application of the method to a case study in the automotive design sector.

2 The Study of Human Movements in Design

Human movements are commonly considered a matter of ergonomic studies, to assess products’ usability and fitness to users. In applications such as workplaces, health care, design for impairments or transportation design, this involves the study of human postures to create products that do not harm users but rather facilitate physical well-being. Being a matter of health and safety, ergonomic studies can thus justify expensive technologies such as Body Tracking and Motion Capture to measure users’ anthropometry in relation to the product. In these fields, in fact, a quantitative assessment is necessary to gather specific and precise knowledge on a broad spectrum of users, and therefore shape a product that matches users’ needs. This kind of user test is usually conducted at the end of the design process, when the design concept has already been formed and needs to be validated. Motion Capture technologies have been used in several studies to support these validations [2-5].

Conversely, product designers traditionally prefer to employ qualitative methods to observe users. These methods differ from quantitative ones in their knowledge claims, the strategies employed and the nature of the data. Qualitative research uses a vertical approach to dig deeper into users’ latent desires and needs [6]. In this way, the focus of research is not only the functional requirements of the product, but the holistic user experience [7]. Human movements are therefore approached with a different attitude and are usually studied through video-recorded user observations. The analysis of these videos provides designers with qualitative insights that heavily rely on their subjective interpretation, and it is often conducted without a structured approach. Yet, the richness of information that comes from a qualitative observation is undoubtedly a source of inspiration for designers, from which to infer knowledge on the human-product interaction. Thus, the study of human movements can give important insights even in the early phases of the design process. To improve the significance of user observations, we believe that a mixed-method approach [8] is the most effective, combining quantitative measurements and qualitative observations. The traditionally dualist perspective of quantitative vs. qualitative research has faded in recent years, leaving room for the collection of numeric information (i.e. measurable data) simultaneously with, or sequentially to, text information (i.e. interviews, etc.).

Recently, the study of human motion has gained attention in the field of Interaction Design. The possibility to capture the embodied experience and use it as the starting point of the design process has inspired several studies on the topic adopting Motion Capture technologies. From an art perspective, the project Bodycloud [9] captured the movements of a dancer to generate a sculpture visualizing his graceful gestures. The work is rooted in the figurative arts, yet it provides a reference framework to understand how the visualization of movement has been tackled in the artistic domain. Another study presented a reverse engineering of human movements for architectural design [10]: the concept of the architectural space was designed as a negative shape of the users’ volumes of motion. The grounding work of Hansen and Morrison [11] instead adopts a semiotic approach to organise the properties and peculiarities of human motion data. The core modalities of movement (e.g. velocity) are related to their specific characteristics (e.g. speed) and to the corresponding best visual description (e.g. the size of a mark) in a Movement Schema [11]. In this way, the richness of motion data is organised according to semiotic criteria, to be represented in a meaningful visualization. This approach can be considered the first step towards making sense of human motion data and integrating them in the design process. However, the literature does not offer a comprehensive method or structured guidelines to support designers facing the complexity of motion tracking. From the studies analysed, the need emerges to preserve a degree of interpretation for the designer, without over-imposing meaning on the visualization of movement data. Furthermore, there is no clear agreement on which modalities of movement should be represented and, more importantly, on how they should be characterised visually. The approach suggested by Hansen and Morrison [11] seems the most valid. Yet, we argue that designers need a representation that fits their traditional design skills, so that they can use it as a reference to shape the concept design. In the next section, we present a method to capture human movements with a mixed-method approach, combining quantitative and qualitative observations of the user experience, to generate a 3D representation that can be used as the starting point of the design process. The method has been applied in a case study in the automotive design domain.

3 The Method

The method presented here couples Motion Capture techniques with the traditional qualitative observation of human movements. Motion Capture technology offers many benefits but also some limitations. The information retrieved through these systems must be processed in order to effectively support designers. A tracking session needs a careful setup, expensive equipment and technical skills that are unusual for designers. Moreover, the data generated with Motion Capture are usually largely inaccessible to designers due to their computational form: they are often presented as a complex aggregate of numerical data, which is difficult to interpret. This research is a first attempt to face these issues and make the use of Motion Capture techniques more accessible to designers. Figure 1 presents an overview of the method that we are now going to illustrate. The first thing designers need to understand is the focus of their study: essentially, what they want to record with the Motion Capture system. To effectively define the specific focus of the experiment and to identify the ‘key phenomena’ to track quantitatively, we suggest basing these choices on preliminary user observations. In our approach, we recommend a qualitative user observation in field-research modality, using video recordings, interviews, questionnaires, etc., to identify the critical issues of the user-product interaction. More generally, designers should implement a quick user observation adopting the traditional methods and tools employed for co-creation and participatory design [6], according to the specific design case. The most common and effective method to collect rich records of the user experience is “recall and describe” [12]. Participants in the study are first video-recorded while performing an action (e.g. trying a product or going through their morning routine). Soon after the task has been completed, they are asked to watch the video and comment on their subjective experience, while the interviewer asks them specific questions. The results can then be analysed to establish the most relevant issues to investigate in the subsequent tracking session.

Fig. 1. Outline of the method and the output for each step

The second step of the method (Fig. 1) involves the setup for the Motion Capture testing phase. At this stage, a number of choices must be made. First of all, designers need to consider, according to the specific design problem, which areas of the human body to track. In our method, we defined eight areas of interest (Fig. 2), which altogether outline the boundaries of the human body. This step is crucial to understand the output data: the higher the number of tracked areas, the greater the complexity of the information retrieved. Secondly, based on this choice, wearable marker-sets must be designed to comply with several criteria. For example, marker-sets must be as asymmetric and three-dimensional in shape as possible, to prevent occlusions and failures in tracking. In this study, we produced a set of wearables combining a rigid body that supports the markers and a flexible, adjustable strip that can fit any user. Another important issue to face at this stage is the testing environment: Motion Capture systems involve a lab setting, but users need a physical representation of the product they must interact with. This method adopts the Abstract Prototyping technique from Human Centred Design [13] to create a rough and synthetic prototype of an artefact, avoiding realistic details. Abstract Prototyping allows the creation of a quick and cheap setup so that users can interact with a prototypical object. As an example, the abstract setup in Fig. 2 reconstructs half of a car’s interior, to physically limit the users’ space of interaction. Lastly, at this stage designers must face the calibration of the Motion Capture system, considering how many cameras are needed, in which configuration, at what distance, etc.
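
A lightweight data structure can help keep these setup choices explicit before calibration. The Python sketch below is one possible way to record which body areas are tracked and how each wearable marker-set is configured; the area names and marker counts are hypothetical placeholders, since the eight areas shown in Fig. 2 are not listed in the text.

```python
from dataclasses import dataclass

@dataclass
class MarkerSet:
    """One wearable marker-set: a rigid part carrying the markers on a strap."""
    area: str          # body area the marker-set is attached to (placeholder name)
    n_markers: int     # number of retro-reflective markers on the rigid part
    asymmetric: bool   # the 3D marker layout should be unique and asymmetric

# Hypothetical configuration: eight tracked areas, as in Fig. 2, with
# placeholder names and marker counts chosen only for illustration.
MARKER_SETS = [
    MarkerSet("head", 4, True),
    MarkerSet("torso", 5, True),
    MarkerSet("left_arm", 4, True),
    MarkerSet("right_arm", 4, True),
    MarkerSet("left_hand", 3, True),
    MarkerSet("right_hand", 3, True),
    MarkerSet("left_leg", 5, True),
    MarkerSet("right_leg", 4, True),
]
```

Writing the configuration down in this form makes it easier to check, before the tracking session, that every rigid body has a distinguishable marker arrangement.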

Fig. 2. The user in a moment of the test and the relative data visualization

In the third step of the method (Fig. 1), the tests are finally ready to be implemented. Participants are asked to wear the marker-sets and interact with the abstract prototype, while being interviewed about their subjective experiences. Based on the themes discovered in the first qualitative assessment, designers are able to focus on the key phenomena that are meaningful for tackling the design problem. In this method, we suggest the semi-structured interview technique [14], which enables the interviewer to follow new ideas and paths of research that may emerge with users, while still relying on a pre-determined set of questions. The users’ movements and interactions are tracked with the Motion Capture system and video-recorded as well. Videos are in fact a relevant reference for refining the motion data when, for example, occlusions occur.

The fourth phase represents the core of the method (Fig. 1). In this step, we provide a procedure to manage the results of the tracking session and to visualize them in a 3D modelling environment. During the tracking session, a large set of raw data has been acquired. Although several measures can be taken to prevent most problems, occlusions, parts to be trimmed, misidentification of markers and other issues can still occur. Once refined, the data can be exported as a datasheet that gathers numerical information on the tracked movements (Fig. 3). In this form, the data are difficult to manage for designers, who instead need a 3D representation to use as the starting point of their design process. Moreover, at this step designers might choose to extract only specific information on the users’ motion. This method focuses on the visualization of the trajectories (i.e. position over time and orientation) of human movements. To identify this information in the complex datasheet, we developed a simple software application able to associate it with each marker-set and generate sub-files listing the X, Y, Z position of the centroid of the marker configuration for each frame. The goal here was to generate new datasheets that comply with standard NURBS-based modelling software. Once the sub-files are imported into the 3D environment, the numeric data are represented as Point Clouds, which is the first step towards a sensible visualization of the data. Yet, Point Clouds still offer limited possibilities in terms of characterisation and modification: for example, it is not possible to differentiate participants, tasks and the specific marker-sets associated with each area of the human body.
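
The extraction step can be sketched in a few lines of code. The Python fragment below is a minimal, hypothetical version of the application described above: it assumes the tracking session is exported as a CSV whose columns are named '<markerset>:<marker>:<axis>' (axis in X, Y, Z) with one row per frame, which is not necessarily the format of a real export, and it writes one "frame, X, Y, Z" centroid sub-file per marker-set.

```python
import csv
from collections import defaultdict

def extract_centroids(export_csv, out_prefix="centroid"):
    """Write one sub-file per marker-set with the per-frame centroid position."""
    with open(export_csv, newline="") as f:
        rows = list(csv.DictReader(f))

    # Group the X/Y/Z columns by marker-set name.
    sets = defaultdict(list)
    for col in rows[0]:
        parts = col.split(":")
        if len(parts) == 3 and parts[2] in ("X", "Y", "Z"):
            sets[parts[0]].append(col)

    for name, cols in sets.items():
        with open(f"{out_prefix}_{name}.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["frame", "X", "Y", "Z"])
            for i, row in enumerate(rows):
                xs = [float(row[c]) for c in cols if c.endswith(":X") and row[c]]
                ys = [float(row[c]) for c in cols if c.endswith(":Y") and row[c]]
                zs = [float(row[c]) for c in cols if c.endswith(":Z") and row[c]]
                if xs and ys and zs:  # skip frames where the whole set is occluded
                    writer.writerow([i, sum(xs) / len(xs),
                                        sum(ys) / len(ys),
                                        sum(zs) / len(zs)])
```

The resulting sub-files contain only plain coordinate lists, which is what makes them easy to import into a NURBS-based modelling environment.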

Fig. 3. Steps of data visualization necessary to achieve a 3D representation in a standard NURBS modelling software.

The creation of Point Clouds enables the construction of curves that show the trajectory of the movement. To generate curves that effectively represent trajectories, the raw Point Clouds have to be processed. First, due to the high number of points collected by the Motion Capture system during a single trial, the sampling frequency was reduced from 100 fps to 16 fps. The data were then imported into a modelling software and various curve-generation methods were tested. Spline interpolation was considered ineffective, since errors due to the precision of the Motion Capture system make the curve irregular, so the generated spline is difficult to visualize and interpret. Bézier curves, instead, reduce the effect of the precision errors and appear smoother. In particular, a Bézier curve with degree = 11 was used. However, the curves may still appear redundant in some parts of the trajectory. To reduce this redundancy, the control points of each curve were decreased to 10% of the original number. At this point, all the trajectories corresponding to users’ movements and interactions can be generated in the same way. In order to visualize them more meaningfully, a graphical representation must be assigned to each variable. This choice is largely left to the designers’ subjectivity: as an example, trajectories can be represented by small pipes, each participant can be attributed a specific colour, and symbols can highlight the differentiation between themes (Fig. 4). In other cases, the distinction between marker-sets can be more meaningful than that between participants, and colours can be distributed according to this criterion. In any case, at the end of this phase designers will have a structured visualization of human movements that depicts the user-product interaction in a 3D modelling software. Thus, they are able to exploit these data as the starting point of the design process, to start shaping the concept design.
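
The downsampling and Bézier smoothing described above were carried out inside the modelling software; the NumPy sketch below only illustrates the same idea outside of it. It reduces the sampling rate from 100 fps to 16 fps and fits a degree-11 Bézier curve by least squares under a uniform parameterisation, which is one common way to obtain such a curve and not necessarily the construction used by the modelling tool. The control-point reduction to 10% mentioned in the text is left to the modelling environment.

```python
import numpy as np
from math import comb

def downsample(points, src_fps=100, dst_fps=16):
    """Keep roughly one sample every src_fps/dst_fps frames."""
    step = max(1, round(src_fps / dst_fps))
    return points[::step]

def fit_bezier(points, degree=11):
    """Least-squares fit of a Bezier curve of the given degree to Nx3 points."""
    pts = np.asarray(points, dtype=float)
    t = np.linspace(0.0, 1.0, len(pts))               # uniform parameterisation
    # Bernstein basis matrix: A[k, i] = C(degree, i) * t_k^i * (1 - t_k)^(degree - i)
    A = np.stack([comb(degree, i) * t**i * (1 - t)**(degree - i)
                  for i in range(degree + 1)], axis=1)
    ctrl, *_ = np.linalg.lstsq(A, pts, rcond=None)    # (degree + 1) control points
    return ctrl

def evaluate_bezier(ctrl, samples=100):
    """Evaluate the fitted curve, e.g. for a quick visual check."""
    degree = len(ctrl) - 1
    t = np.linspace(0.0, 1.0, samples)
    A = np.stack([comb(degree, i) * t**i * (1 - t)**(degree - i)
                  for i in range(degree + 1)], axis=1)
    return A @ ctrl
```

In practice, a smoother, lower-degree curve trades some positional accuracy for readability, which is acceptable here because the goal is a legible trajectory rather than a metrically exact one.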

Fig. 4. Rendering of the complete volume of interaction for all users

4 The Case Study

The method presented here was used during a case study developed in collaboration with Design Innovation, a design agency based in Milan, and the R&D department of Fiat Chrysler Automobiles Group (FCA Group). The design agency was commissioned to carry out a user-centred research project to redefine the car interior for the passenger beside the driver. More specifically, the passenger seat is usually designed as the symmetrical counterpart of the driver seat, sometimes even lacking some features (such as the lumbar support). However, driver and passenger have very different needs in terms of comfort, safety and freedom of movement. The automotive company therefore asked for new concepts of the passenger seat with a special focus on comfort and UX. Following our method, Design Innovation conducted a first round of user observations with 9 participants, 5 male and 4 female, video-recording them with a frontal GoPro© [15] camera and one hand-held camera in the back seats. Participants were taken on a medium-length car journey (on average 40 min), sitting in the passenger seat, after which they were interviewed about their level of comfort, their needs and their expectations. Through these first results, designers identified five key issues to explore further: (1) the assessment of comfort in posture; (2) the interaction with either people or objects in the back seats; (3) the placement of personal items, such as bags, coats, etc.; (4) the interaction with smart devices; (5) the users’ perception of the space. These problem areas were found to be correlated with alterations in posture and gestures. As in the second step of the method, the test setup was implemented by building an Abstract Prototype of half a car interior, to reconstruct the car space around the test participants and to physically limit their space of interaction. In this setup, only the longitudinal left side of the car was reproduced, to prevent visual occlusions that would affect the capture results.

The tests were conducted using a Motion Capture system based on 6 Flex 3 cameras by OptiTrack© [16]. The cameras were placed at a height of 220 cm, equally distant from each other. The human movements were tracked using 8 wearable marker-sets composed of a rigid part mounted onto a flexible strip. For each marker-set, a different configuration of retro-reflective markers was arranged, varying the number and disposition of markers to prevent misidentification and tracking errors. The Motion Capture system captured the position and orientation of each marker-set, corresponding to the selected parts of the human body. In the Motion Capture system, Rigid Bodies are defined as clusters of reflective markers in a unique configuration, which allows them to be identified and tracked in a cloud of 3D points. It is possible to track multiple Rigid Bodies at a time in full 6 degrees of freedom (position and orientation, 6DOF). The shapes of the marker-sets were chosen to maximize tracking capability. Spherical reflective markers were preferred, as they guaranteed the most stable and accurate 3D tracking. Markers were arranged in asymmetrical and unique configurations to reduce the likelihood of misidentification and swapping.
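
For readers unfamiliar with Rigid Body tracking, the sketch below shows, in Python, what a single 6DOF sample of one marker-set looks like conceptually: a frame index, a position and a quaternion orientation. The field names, units and example values are ours, chosen for illustration, and do not reproduce the OptiTrack data format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RigidBodySample:
    """One tracked frame of a marker-set defined as a Rigid Body (6DOF).

    Illustrative only: field names and units are not the OptiTrack format.
    """
    frame: int                                      # frame index in the capture
    body_id: int                                    # which of the 8 marker-sets
    position: Tuple[float, float, float]            # centroid X, Y, Z
    orientation: Tuple[float, float, float, float]  # unit quaternion (x, y, z, w)
    mean_error: float                               # mean marker residual (quality cue)

# A single hypothetical sample, e.g. one marker-set at frame 1200.
sample = RigidBodySample(frame=1200, body_id=3,
                         position=(0.42, 1.05, -0.31),
                         orientation=(0.0, 0.707, 0.0, 0.707),
                         mean_error=0.0008)
```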

In our study, we selected 9 participants (5 female, 4 male, aged 25-52). They were informed about the video recording and told about the goals and objectives of the test. The participants claimed to be at ease with the wearable devices, and the researchers noticed that after a few minutes of testing people tended to forget about cameras and markers, focusing on their own experiences. The test was split into two phases: in the first, participants were asked to recall one meaningful experience as a passenger in a car. This first phase relied on the Open Interview technique. The second part of the interview instead used a semi-structured approach, following a set of pre-determined questions to tackle the problem areas defined in the pilot study. Participants were interviewed about their subjective perceptions of the car interior and asked to recall and describe their personal experiences as passengers. While doing so, they were asked to show the positions they assume in the car and the movements they perform, for example, to grab their personal belongings in the back seats. During the tests, we tracked the users’ movements and gestures inside the abstract set-up. The 3D capture of their movements generated a set of human motion data, from which it was possible to obtain a volumetric 3D model representing the (desired) space of interaction for passengers.

As in the fourth step of the method, the raw data needed to be processed before taking shape in the 3D modelling environment. We performed all the operations described at that stage to discard any errors that occurred during the tests, and we exported the datasheet including the whole aggregate of information generated in the tracking session. Through the software application developed specifically for this method, we were then able to extract the eight sub-files listing the X, Y, Z position of every marker-set for each tracking session. We then imported the sub-files into a NURBS-based modelling software and created the associated Point Clouds. Following the last steps of the method, it was possible to achieve the visualizations in Fig. 4. The complete 3D model gathering the data of every user, some specific visualizations and the qualitative analysis of the interviews were submitted to the designers and the R&D department to initiate and inform the design process.
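
The import step depends on the modelling software in use. As one possible example, the sketch below uses Rhino's rhinoscriptsyntax (a Python scripting layer for a NURBS modeller, not a tool named in the paper) to read a "frame, X, Y, Z" sub-file and add its points as a Point Cloud on a dedicated layer; the file and layer names are hypothetical and follow the sub-file sketch given earlier.

```python
import csv
import rhinoscriptsyntax as rs  # available inside Rhino's Python editor

def import_centroid_subfile(path, layer_name):
    """Read one 'frame, X, Y, Z' sub-file and add it as a Point Cloud."""
    points = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            points.append((float(row["X"]), float(row["Y"]), float(row["Z"])))

    # One layer per marker-set (or per participant) keeps the clouds distinguishable.
    if not rs.IsLayer(layer_name):
        rs.AddLayer(layer_name)
    cloud_id = rs.AddPointCloud(points)
    rs.ObjectLayer(cloud_id, layer_name)
    return cloud_id

# Example call (hypothetical file and layer names):
# import_centroid_subfile("centroid_left_hand.csv", "P01_left_hand")
```

Grouping the imported clouds by layer is one simple way to recover the differentiation between participants and marker-sets that raw Point Clouds otherwise lack.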

5 Results and Discussion

The amount of data obtained in the case study, coupling the Motion Capture technique with qualitative observations of users, supplied information on the trajectories of the human movements and on the areas where users interact the most. This makes it possible to shape new concepts of the passenger seat as a negative model, using the data as a starting point for the design process. The interview results provided other interesting insights, outlining some critical issues for every task. These suggestions were reflected in the users’ movements in the corresponding 3D visualization (Fig. 4). Most of the results confirmed the insights collected in the first user observation, yet they provided deeper information. For instance, the tests showed the need for greater flexibility of movement, highlighting the participants’ willingness to interact with the back seats, especially in the presence of children or pets and, in general, on long journeys. Participants also appeared uncomfortable in their postures, especially regarding their legs and arms. They reported the need for a flexible lumbar support and more legroom, and said they would appreciate armrests. The design team was then asked for feedback on how they would use the complete package of results coming from the user tests. Through a questionnaire, they asserted the significant value of the method in the early stages of the design process, to inspire and inform them. The 3D nature of the data was specifically seen as an interesting starting point to design the seat “as a negative shape”, making the “form follow the data”. In this way, “the design of the style is based on solid, reliable data, merging effectiveness and style”. Yet, they also suggested some improvements to the method. For instance, they expressed the need for an interface to navigate the several variables, as well as the possibility to explore other information on the users’ movements (e.g. “sudden changes”; “the time spent in a certain posture”; “the frequency of a specific gesture”). Moreover, they would greatly appreciate the integration of a mobile Motion Capture system, and the absence (or at least flatness) of markers.

As shown here, the results of this first case study are encouraging, although the method still presents some limitations. This study clearly showed the possibility of using Motion Capture systems as generative tools able to inform the design process and stimulate the generation of design concepts, rather than simply validating them. What is particularly interesting for designers is the chance to have a flexible 3D representation of users’ movements in a modelling environment they commonly use in their design process. This makes the data generated much more useful and valuable for them, as designers can directly exploit the data as the starting point to shape the new product. Our method, providing step-by-step guidelines and a careful description of every action, supports designers who wish to integrate a semi-quantitative approach into their user observations. Obviously, some technical limitations remain: as mentioned before, the system calibration, the design of the wearable marker-sets and, more in general, the test setup may heavily influence the delicate phase of capturing human gestures and generate unexpected errors in the tracking session. Other difficulties lie in the intrinsic nature of Motion Capture systems: for example, a portable toolkit would better suit the traditional approach of designers and open up new possibilities. Lastly, even if the method simplifies the transition from computational data to their 3D representation, a user interface allowing a greater integration between NURBS-based modelling applications and Motion Capture systems could largely increase the chances of this technology being adopted for user observations in the design process.

6 Conclusions

In this paper, we presented a novel method to use Motion Capture systems to inform and nurture the design process at its early stages. This method exploits the richness of information that can be gathered in a tracking session to generate a flexible, three-dimensional representation of human movements in a modelling environment, so that designers can use it to start shaping a new product. The method provides step-by-step guidelines to implement a tracking session and couple quantitative observation with qualitative interviews, following a mixed-method approach. In this way, human movements can be used to infer users’ personal space of interaction with a product and give important insights for the design process. Adopting this method makes it possible to extract the raw data of the tracking session, refine them and generate a meaningful visualization for designers. In this paper, we presented one case study in the automotive design domain. However, all products involving spatial interaction are potentially suitable for testing the method, since in these cases the study of human gestures acquires more value. More specifically, other case studies could easily be conducted on other means of transportation, such as airplanes or trains. Other fields, such as the design of home/office stationery, furniture and workstations, could also provide interesting applications. Lastly, many sports could be an interesting field of application. In conclusion, we argue that this study has sufficiently addressed the issue of visualizing motion data for the design process. Future studies can instead focus on the designers’ viewpoint, to understand how they can exploit this kind of data in subsequent design actions.