
1 Introduction

With advances in three-dimensional (3D) content and user interface (UI) technologies, the number of information services through which users can experience virtual 3D objects and scenes is constantly growing [1–3]. In this paper, we refer to this type of service as a realistic experience service. To provide a realistic experience successfully, the service should be well designed so that it offers high user satisfaction and ease of use. Therefore, it is important to apply user-centered UI/UX design principles when developing realistic experience services [4, 5]. Among such realistic experience services, we target a virtual fitting service, as shown in Fig. 1. This service enables customers to check the size and style of clothes through an intelligent mirror, which makes them look as if they were wearing the real garments.

In this paper, the proposed dynamic interaction refers to interaction in which the system makes an effort on behalf of the human; this concept goes beyond human-centered interaction. In a dialogue between two humans, smooth communication requires that each party consider both the situation and the state of the partner. Likewise, if an information system that supports a person's activities can also estimate user conditions such as thoughts and situation, its interaction will better fit the needs of the user [1–3]. Therefore, moving beyond the conventional command-response model, a dynamic-interaction model is required: one that provides appropriate and effective information to the user by inducing the user's interest and estimating the user's psychological state.

Fig. 1. An example of the AREIS applied to shopping

This paper proposes an intelligent system that focuses on such proactive interaction. In particular, much UI/UX research based on visual cues has recently been conducted [4–12]. This paper presents a way to make the UI/UX of an augmented reality experience service not only friendlier but also more engaging by using visual cues. Thus, we present an intelligent information system that induces the user's interest with visual cues suited to an augmented reality experience service, and that provides an intuitive UI/UX [13–17].

2 The Proposed System: AREIS

The proposed AREIS consists of four modules: a user information processing module, an experience item processing module, an interactive UI processing module, and an experience contents service module. The I/O flowchart of the proposed system is presented in Fig. 2. Once a user starts interacting, the system continuously observes the user, recognizes meaningful actions among the user's behaviors, and estimates the user's intention. Simultaneously, the system displays virtual fitting results on a mirror-type kiosk; the fitting results look as if the user were really dressed in the clothes, rendered in accordance with the user's point of view.
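To make this flow concrete, the following minimal Python sketch mirrors the per-frame I/O of Fig. 2. All class and method names (UserInfoModule, fit_item, and so on) are hypothetical illustrations of the module boundaries, not the authors' implementation.

    # Hypothetical sketch of the AREIS per-frame pipeline (names invented).
    class UserInfoModule:
        def process(self, frame):
            # Track the skeleton, update the avatar, extract features.
            return {"skeleton": None, "features": {"gender": "unknown"}}

    class ExperienceItemModule:
        def fit_item(self, user_state, item_id):
            # Deform the 3D garment geometry to the measured body.
            return {"item": item_id, "mesh": None}

    class InteractiveUIModule:
        def interact(self, user_state):
            # Observe behavior, recognize actions, estimate intention.
            return {"intention": "browse", "ui_event": None}

    class ContentsServiceModule:
        def render(self, user_state, fitted_item, ui_state):
            # Compose avatar, garment, and UI on the mirror-type display.
            print("rendering", fitted_item["item"], ui_state["intention"])

    def run_frame(frame, item_id):
        user = UserInfoModule().process(frame)
        item = ExperienceItemModule().fit_item(user, item_id)
        ui = InteractiveUIModule().interact(user)
        ContentsServiceModule().render(user, item, ui)

    run_frame(frame={"depth": None, "color": None}, item_id="jacket_01")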

2.1 User Information Processing Module

This module generates a user avatar matching the user's measured body, and tracks the user's motion using Kinect sensor data, webcam data, and calibration data. In addition, this module extracts user features such as gender, age, body shape, and style.
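As one illustration of the kind of body measurement this module performs, the sketch below estimates a user's height from depth-sensor skeleton joints. The joint names and the simple segment-chaining heuristic are assumptions for illustration, not the authors' algorithm.

    import numpy as np

    def joint_distance(a, b):
        # Euclidean distance between two 3D joint positions (metres).
        return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

    def estimate_height(joints):
        # Approximate body height by chaining skeleton segments from
        # head to foot; `joints` maps joint names to (x, y, z) metres.
        chain = ["head", "shoulder_center", "spine", "hip_center",
                 "knee_left", "ankle_left", "foot_left"]
        return sum(joint_distance(joints[a], joints[b])
                   for a, b in zip(chain[:-1], chain[1:]))

    # Example with synthetic joint data (standing pose, metres):
    joints = {
        "head": (0, 1.70, 2.0), "shoulder_center": (0, 1.45, 2.0),
        "spine": (0, 1.10, 2.0), "hip_center": (0, 0.95, 2.0),
        "knee_left": (0.1, 0.50, 2.0), "ankle_left": (0.1, 0.08, 2.0),
        "foot_left": (0.15, 0.02, 2.0),
    }
    print(f"estimated height: {estimate_height(joints):.2f} m")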

2.2 Experience Item Processing Module

This module creates 3D geometry data based on the sensor and calibration data, and generates digital experience items from the acquired 3D geometry. Here, digital experience items are objects that users can experience as 3D content, such as clothes and bags.
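As a minimal sketch of what generating a digital experience item could involve, the code below scales a template garment mesh to the measured body dimensions. The uniform per-axis scaling is an assumed simplification; the paper does not describe the actual geometry processing at this level of detail.

    import numpy as np

    def scale_item_to_body(template_vertices, template_size, body_size):
        # template_vertices: (N, 3) array of mesh vertex positions.
        # template_size / body_size: (width, height, depth) in metres.
        scale = np.asarray(body_size, float) / np.asarray(template_size, float)
        return np.asarray(template_vertices, float) * scale  # per-axis scaling

    # Example: stretch a unit-cube "garment" to a 0.5 x 0.7 x 0.3 m torso.
    verts = np.array([[0, 0, 0], [1, 1, 1], [0.5, 0.5, 0.5]])
    print(scale_item_to_body(verts, template_size=(1, 1, 1),
                             body_size=(0.5, 0.7, 0.3)))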

Fig. 2. The I/O flowchart of the AREIS

2.3 Interactive UI Processing Module

This module is the core of the system; with it, we propose an interaction model based on the dynamic-interaction method. To interact with the user, this module observes the user's behavior, recognizes the user's actions, and estimates the user's intention. This module has the following advantages:

  • This module adaptively provides users with intelligent and convenient information through the dynamic-interaction method. This method performs reactive interaction and proactive interaction between the user and the system at the same time. Through this function, users can interact with the system in an easier and more convenient way.

  • This module increases both the user's interest in and understanding of the system's operation by providing service information matched to the user's experience level. For instance, the module can estimate the user's level of understanding of and interest in the serviced information, and provide a customized service to each user based on this estimate.

This module helps users to control the system easily through interactive gestures, which are defined using gesture classification and clustering techniques grounded in user studies.
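The paper does not detail the classifier, but a common approach consistent with "gesture classification and clustering" is to cluster recorded gesture features and label new gestures by nearest centroid. The sketch below illustrates that idea; the feature choice (hand speed, hand height) and gesture names are invented for illustration.

    import numpy as np

    def nearest_centroid(x, centroids):
        # Return the index of the closest gesture-cluster centroid.
        return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

    # Suppose a user study produced feature vectors for two gestures,
    # e.g. (hand speed, hand height) pairs -- purely illustrative.
    swipe = np.array([[1.2, 0.9], [1.1, 1.0], [1.3, 0.95]])
    push  = np.array([[0.4, 1.4], [0.5, 1.5], [0.45, 1.45]])

    # "Clustering": here simply the mean of each labelled group.
    centroids = np.stack([swipe.mean(axis=0), push.mean(axis=0)])
    labels = ["swipe", "push"]

    new_gesture = np.array([1.15, 0.92])
    print("recognized:", labels[nearest_centroid(new_gesture, centroids)])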

2.4 Experience Contents Service Module

This module presents visual content on the screen, including the user's avatar and the experience item, according to the user's measured body, motion, and recognized gestures. It also visualizes the dynamic UI/UX design based on the user's characteristics; the results are displayed on a mirror-type display.

3 Dynamic-Interaction UI/UX Design

To develop an intuitive and familiar interface, we design a dynamic interaction between the user and the system. In this paper, we propose an interaction model that can be represented by the following scheme:

$$ Interaction\;Modeling = Observation \oplus Recognition \oplus Estimation, $$
(1)

where ⊕ implies dynamic interactions among the functional modules. That is, we define an intelligent information system with observation, recognition, and estimation capabilities and regard these three functions as fundamental submodules to realize dynamic interactions between the user and the system.

By integrating observation, recognition, and estimation, various dynamic information flows are formed. In our model, reasoning is the function that dynamically controls these information flows. With the proposed interaction model, the system provides information to identify the user's actions and intentions. The system then adaptively performs reactive or proactive interaction based on the user's reaction to the provided information. Figure 3 shows the proposed dynamic interaction between the human and the information system.
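To make Eq. (1) concrete, the following sketch composes observation, recognition, and estimation, with a reasoning step routing between reactive and proactive responses. The thresholds, feature names, and state labels are invented for illustration only.

    # Sketch of the dynamic-interaction loop of Eq. (1):
    # Interaction Modeling = Observation (+) Recognition (+) Estimation.

    def observe(sensor_frame):
        # Observation: extract raw user state (gaze direction, hands).
        return {"face_dir": sensor_frame.get("face_dir", 0.0),
                "gesture_feat": sensor_frame.get("gesture_feat")}

    def recognize(state):
        # Recognition: map observed features to a discrete action.
        return "gesture" if state["gesture_feat"] is not None else "idle"

    def estimate(state, action):
        # Estimation: infer intention, e.g. interest from gaze direction.
        interested = abs(state["face_dir"]) < 0.3  # facing the mirror
        return "engage" if (interested or action == "gesture") else "passing_by"

    def reason(intention):
        # Reasoning: choose reactive vs proactive interaction.
        if intention == "engage":
            return "reactive: respond to the user's command"
        return "proactive: show a visual cue to attract the user"

    frame = {"face_dir": 0.9, "gesture_feat": None}  # user walking past
    state = observe(frame)
    print(reason(estimate(state, recognize(state))))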

Fig. 3. Dynamic-interaction between human and information system

3.1 AR-Based 3D-Experience Service

To implement the proposed intuitive and familiar interface with the dynamic-interaction model, the proposed AREIS performs the following three types of proactive interaction:

  1. To attract customers to the service, the proposed system observes both the position and the direction of the customer's face, using user information extracted from images captured by the Kinect and DSLR sensors. The system then draws the customer toward itself using a visual-cue approach.

  2. To measure the customer's body size, the system guides the customer to the measuring position using a visual-cue approach.

  3. To let the customer control the system by hand, the system guides the customer's hand to the buttons on the display using a visual-cue approach.

To increase the effectiveness of the visual guidance, we used a UI agent in these three proactive interactions. Figure 4 shows the three interactions. For the experiments, the UI agent was implemented using a character from an existing game application. For shopping rather than gaming, the technology can be paired with a suitable UI agent chosen to match the atmosphere of the shopping center.
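The three proactive interactions could be sequenced as a simple state machine driven by what the system has established about the customer so far. The sketch below shows one such sequencing; the stage names and trigger conditions are assumptions for illustration, not the authors' control logic.

    def next_proactive_cue(customer):
        # Pick which visual cue the UI agent should show next.
        # `customer` records what the system has established so far;
        # the three cues match the proactive interactions above.
        if not customer.get("attracted"):
            # 1. Attract a passer-by: agent beckons toward the mirror.
            return "cue_attract_to_mirror"
        if not customer.get("measured"):
            # 2. Guide the customer onto the body-measuring position.
            return "cue_move_to_measuring_spot"
        # 3. Guide the customer's hand to the on-screen button.
        return "cue_guide_hand_to_button"

    print(next_proactive_cue({"attracted": True, "measured": False}))
    # -> cue_move_to_measuring_spot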

Fig. 4. Proactive interactions of the three types using a UI agent based on visual affordance

3.2 Experimental Environment and Analysis Results

For the AREIS, the hardware setup is designed to provide virtual-mirror-style visualization. The system consists of a 70-inch full-HD display combined with a mirror (mirror reflectance: 83%, panel transmittance: 33%). The screen shows graphics generated by a desktop PC (personal computer) running software developed for the AR-based 3D-experience service. The test bed is equipped with three imaging sensors: a webcam, a DSLR camera, and a Microsoft Kinect depth-sensing camera. The Kinect is the main imaging device and is used for tracking users' motion and for gesture recognition. Although the Kinect also has an RGB color camera, the resolution of its video stream is too low for a portrait-oriented screen. To use a higher-resolution image as the video background on the screen, the webcam and the DSLR can optionally be used after proper calibration between the imaging sensors. The proposed system was developed on a PC platform running the Microsoft Windows 7 operating system, using the Unity 3D v4.5 game engine. The main visualization software is a Unity 3D project, and its integration with the gesture interaction module is achieved through a Unity plug-in developed in Visual Studio C++ 2010 with the Microsoft Kinect SDK v1.7. As the depth-sensing camera, we use the Kinect for Windows (v1), which provides a 1080 × 1920 RGB image stream as well as a 640 × 480 depth image stream.
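As an illustration of the inter-sensor calibration mentioned above, the sketch below projects a 3D point from the depth sensor's coordinate frame into a high-resolution camera image using a standard pinhole model. The intrinsic and extrinsic values are placeholders, since the paper does not publish its calibration parameters.

    import numpy as np

    def project_to_camera(point_3d, R, t, K):
        # R, t: rotation (3x3) and translation (3,) from the depth
        #       sensor's frame to the color camera's frame.
        # K:    3x3 intrinsic matrix of the color camera.
        p_cam = R @ np.asarray(point_3d, float) + t   # into camera frame
        uvw = K @ p_cam                               # pinhole projection
        return uvw[:2] / uvw[2]                       # pixel coordinates

    # Placeholder calibration: cameras aligned, 5 cm horizontal offset.
    R = np.eye(3)
    t = np.array([0.05, 0.0, 0.0])
    K = np.array([[1400.0, 0.0, 540.0],    # fx, 0, cx (portrait screen)
                  [0.0, 1400.0, 960.0],    # 0, fy, cy
                  [0.0, 0.0, 1.0]])

    hand = np.array([0.2, 0.1, 2.0])        # a tracked joint, 2 m away
    print(project_to_camera(hand, R, t, K))  # -> on-screen pixel position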

In a user study, we analyzed user satisfaction based on the users' expressions and a subjective questionnaire. The results were as follows: the proposed dynamic-interaction model enhances the understanding of the user's intention and inclination, and also improves usability, including ease of use.

4 Conclusions

This paper has described the idea and goal of interaction between a human and an intelligent information system for a realistic experience service. The overall results show that the proposed user-interface concept of using software UI agents as visual affordance cues for gesture interaction with large-screen displays is feasible, and that it could be applied to public information displays that need to engage more actively with potential users. The results of the user study suggest that using UI agents as visual guides for gesture-based interfaces can benefit the emotional side of the user experience, yet requires careful consideration of the type of application and of the design of the virtual character used as the UI agent.