1 Introduction

Augmented Reality (AR) presents information by overlaying virtual content on the real world. As smart mobile devices have become widespread and AR-enabling technologies have matured, AR applications have been applied in diverse areas such as entertainment, manufacturing, and education.

AR books have been developed to assist children’s education [1,2,3]. AR books can help users understand the contents of a book, or provide additional information, by augmenting virtual information on its pages. AR books have the advantage of boosting user interest and improving immersion, but they are not yet widely used. Most AR books simply let users experience pre-made AR content. A few AR books provide ways to author AR content, but users can create content only with the 3D models these authoring systems provide. Even when an authoring system allows users to import their own 3D models, few users can create such models, because doing so requires knowledge of 3D modeling tools such as 3DS Max and Maya. To overcome this problem, we propose a picture book-based AR content authoring system that creates AR content from 2D objects instead of 3D objects in a mobile environment. A user can create AR content from any object in the real world by capturing an image of it. This is the main advantage of the proposed system over existing AR authoring systems.

2 Proposed Method

2.1 System Overview

The proposed system is divided into an authoring part and a visualization part. The authoring part contains page selection, object creation, and object setting procedures (see Fig. 1). A user first selects a page of a book on which the created objects will be augmented. Next, the user captures an image of a target object and draws an outline containing the target; the system extracts the target and creates a 2D object in the object creation procedure. The user can give the 2D object dialogue by adding a speech bubble. After finishing the scene creation, the user can view the AR content on his/her mobile device.

Fig. 1. System overview
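The data produced by the three authoring procedures can be pictured as a small content model. The sketch below is our own illustration of that model; the class and field names are hypothetical, not taken from the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SpeechBubble:
    text: str
    offset: tuple = (0.0, 0.4)          # position relative to its object (assumed default)

@dataclass
class AugmentedObject:
    sprite: object                      # 2D image region cut out by the user
    position: tuple = (0.0, 0.0, 0.0)   # placement in the augmented scene
    scale: tuple = (1.0, 1.0, 1.0)
    bubble: Optional[SpeechBubble] = None

@dataclass
class ARContent:
    page_image: object                  # the selected book page, used as the AR target
    objects: list = field(default_factory=list)
```

Page selection fixes `page_image`, object creation appends `AugmentedObject`s, and object setting fills in their positions, scales, and bubbles.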

2.2 Page Selection

The first procedure of the authoring part is page selection. The user selects one of the pages of a book; this page serves as the basis for the new AR content, and the virtual objects created in the object creation procedure (described in the next section) are augmented on it. The page can be selected in two ways: the user can capture an image of the desired page, or select one from a database containing images of the book's pages.

2.3 Object Creation

The object creation procedure consists of image selection and object extraction. The user can capture the real environment or select an image that contains target objects. Target objects are extracted from the image by drawing their outlines via touch interaction on the mobile device (see Fig. 2). The starting and ending points of a stroke are connected automatically, making it easy to create a closed region with touch interaction. Each closed region represents one 2D object that will be augmented on the page image selected in the page selection procedure.

Fig. 2. Drawing an outline to select an object
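One plausible way to implement the extraction step is an even-odd point-in-polygon test over the closed outline. The pure-Python sketch below is our illustration under that assumption, not the authors' implementation; a production system would likely use an image library's polygon-fill routine instead.

```python
def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting test: is (x, y) inside the closed outline?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]   # wrap: start and end points are connected
        if (y1 > y) != (y2 > y):        # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def extract_object(image, stroke):
    """Return the pixels of `image` enclosed by the user's touch stroke.

    `image` is a 2D list of pixel values; `stroke` is the list of (x, y)
    touch samples. The first and last samples are implicitly connected,
    so the stroke always forms a closed region, as the paper describes.
    Pixels outside the region become None.
    """
    h, w = len(image), len(image[0])
    return [[image[y][x] if point_in_polygon(x + 0.5, y + 0.5, stroke) else None
             for x in range(w)] for y in range(h)]
```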

2.4 Object Setting

In the object setting procedure, the extracted 2D objects are placed at the desired positions with the right sizes, and speech bubbles are added to the 2D objects to complete the new content.

The transformation widget shown in Fig. 3 is used to position an object in the augmented world. A user can select one of the x, y, and z axes and translate the object along that axis with a single-touch drag. The object is scaled uniformly using a two-finger pinch or spread gesture. A user can also select the cube located at the end of an axis and drag it to scale the object along that axis alone, for more precise control.

Fig. 3. Translating the object with the transformation widget
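The widget's manipulations reduce to simple per-axis vector operations. The sketch below is a hypothetical illustration of the three gestures (axis-constrained translation, uniform pinch scaling, and per-axis cube scaling); the function names are ours, not the system's.

```python
# Unit direction for each widget axis.
AXES = {'x': (1.0, 0.0, 0.0), 'y': (0.0, 1.0, 0.0), 'z': (0.0, 0.0, 1.0)}

def translate_along_axis(position, axis, drag_amount):
    """Single-touch drag: move only along the selected widget axis."""
    d = AXES[axis]
    return tuple(p + drag_amount * a for p, a in zip(position, d))

def scale_uniform(scale, pinch_factor):
    """Two-finger pinch/spread: scale all three axes by the same factor."""
    return tuple(s * pinch_factor for s in scale)

def scale_along_axis(scale, axis, factor):
    """Dragging the cube at the end of one axis: scale that axis only."""
    d = AXES[axis]
    return tuple(s * (factor if a else 1.0) for s, a in zip(scale, d))
```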

A speech bubble template appears on the screen when the user taps the ‘add bubble’ button, and the user can enter text into it. The bubble is bound to an object by selecting the target object and is initially placed at a default position. The user can then move the speech bubble along the x and y axes with a 2D position widget to adjust its position (see Fig. 4).

Fig. 4. Adding a speech bubble
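The bubble workflow (bind to object, place at a default position, adjust in 2D) can be sketched as follows; the dict-based representation and the default offset are assumptions for illustration, not details from the paper.

```python
def add_bubble(obj, text, default_offset=(0.0, 0.4)):
    """Bind a speech bubble to the selected object at a default position.

    `obj` is any dict-like object record; the offset is relative to the
    object (the default value here is an assumption).
    """
    obj['bubble'] = {'text': text, 'offset': list(default_offset)}

def move_bubble(obj, dx, dy):
    """2D position widget: shift the bubble along the x and y axes only."""
    obj['bubble']['offset'][0] += dx
    obj['bubble']['offset'][1] += dy
```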

2.5 Visualization

The visualization procedure allows the user to view the AR content created in the authoring procedure. The user can view the augmented objects and speech bubbles by pointing the mobile device’s camera at the selected page of the book (see Fig. 5). The user can move around the page to view the content from various sides.

Fig. 5. Viewing example
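Per camera frame, visualization amounts to detecting the authored page in the camera image and rendering the page's objects at the estimated pose. This sketch abstracts the tracker and renderer as injected callables, since the paper does not name its tracking or rendering components.

```python
def visualize_frame(camera_frame, content, detect_page_pose, render_object):
    """One frame of the viewing loop.

    `content` is a dict holding the authored 'page' image and its 'objects';
    `detect_page_pose` and `render_object` stand in for the (unnamed)
    tracking and rendering components of the system.
    """
    pose = detect_page_pose(camera_frame, content['page'])
    if pose is None:
        return camera_frame           # page not in view: show the raw frame
    frame = camera_frame
    for obj in content['objects']:    # draw every authored object and its bubble
        frame = render_object(frame, obj, pose)
    return frame
```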

3 Conclusion

Most existing AR authoring systems let users create AR content with 3D models, either provided by the system or imported by the user. 3D models may provide a more immersive experience, but they are difficult to create, and most users are not familiar with the tools used to create them. As a result, most users create AR content only with the provided 3D models, which limits the diversity of the content.

This study introduced an AR authoring system that makes it easy to generate diverse AR content, at the cost of some immersion. A user creates AR content with 2D objects extracted from captured or provided images instead of 3D objects. We conducted a user study with a small group of university students and found that users could create diverse content with the proposed system. As future work, we will conduct a user study with a larger group, focusing on the ease of creating various AR content and on a comparison between 2D-based and 3D-based AR content.