Abstract
Various augmented reality systems have been developed and used in diverse areas recently. However, it is still difficult for a user to create augmented reality contents, since most authoring systems require 3D objects. Most users are not familiar with the tools used to create 3D objects; as a result, they create augmented reality contents only with provided 3D objects, which limits the diversity of the contents. In this paper, we propose a picture book-based augmented reality authoring system. With the proposed system, a user can create diverse augmented reality contents with 2D objects extracted from captured or provided images instead of 3D objects.
1 Introduction
Augmented Reality (AR) provides information by augmenting the real world with virtual information. As smart mobile devices are widely used and AR enabling technologies are stabilized, AR applications have been applied in diverse areas such as entertainment, manufacturing, and education.
AR books have been developed to assist children’s education [1,2,3]. AR books can help users understand the contents of a book, or provide additional information, by augmenting virtual information on the book. AR books have the advantage of boosting user interest and improving immersion, but they have not yet been widely adopted. Most AR books simply let users experience provided AR contents. A few AR books offer ways to author AR contents, but with these authoring systems users can only create AR contents from provided 3D models. Even when an authoring system allows users to import their own 3D models, only a few users can create them, because doing so requires knowledge of 3D modeling tools such as 3DS Max and Maya. To overcome this problem, we propose a picture book-based AR content authoring system that creates AR contents based on 2D objects instead of 3D objects in a mobile environment. A user can create AR contents from any object in the real world by capturing an image of it. This is the main advantage of the proposed system compared with existing AR authoring systems.
2 Proposed Method
2.1 System Overview
The proposed system is divided into an authoring part and a visualization part. The authoring part contains the page selection, object creation, and object setting procedures (see Fig. 1). A user first selects a page of a book on which the created objects will be augmented. Next, the user captures an image of a target object and draws an outline containing the target; the system extracts the target and creates a 2D object in the object creation procedure. The user can give the 2D object dialogue by adding a speech bubble. After finishing the scene creation, the user can view the AR content on his/her mobile device.
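The authoring pipeline described above could be modeled with a few simple data classes. The following sketch is illustrative only; none of these class or field names come from the paper:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SpeechBubble:
    text: str
    offset: Tuple[float, float] = (0.0, 0.0)  # 2D offset from the bound object

@dataclass
class SceneObject:
    image_id: str                       # source image the object was cut from
    outline: List[Tuple[int, int]]      # user-drawn outline (pixel coordinates)
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    scale: Tuple[float, float, float] = (1.0, 1.0, 1.0)
    bubbles: List[SpeechBubble] = field(default_factory=list)

@dataclass
class ARContent:
    page_id: str                        # the selected book page
    objects: List[SceneObject] = field(default_factory=list)

# Assemble a one-object content item, following the authoring flow:
# select a page, extract an object, then attach a speech bubble.
content = ARContent(page_id="page-03")
obj = SceneObject(image_id="capture-001", outline=[(10, 10), (60, 10), (60, 80)])
obj.bubbles.append(SpeechBubble(text="Hello!", offset=(5.0, -12.0)))
content.objects.append(obj)
```

A structure like this keeps each piece of authored state (page, objects, bubbles) serializable, so a finished scene can be stored and later reloaded for visualization.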
2.2 Page Selection
The first procedure of the authoring part is page selection, in which a user selects one of the pages of a book. This page serves as the basis for a new AR content: the virtual objects created in the object creation procedure, described in the following section, are augmented on it. The page can be selected in two ways: the user can capture an image of the chosen page, or select one from a database containing images of the book's pages.
2.3 Object Creation
The object creation procedure consists of image selection and object extraction. A user can capture the real environment or select an image that contains the target objects. Target objects are extracted from the image by drawing their outlines with touch interaction on the mobile device (see Fig. 2). The starting and ending points are automatically connected to ease the creation of a closed region. Each closed region becomes one 2D object that will be augmented on the page selected in the page selection procedure.
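A minimal sketch of this extraction step, assuming the outline is a list of touch points: the start and end points are joined to close the region, and a standard ray-casting point-in-polygon test decides which pixels belong to the 2D object (the function names are illustrative, not from the paper):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def close_outline(points: List[Point]) -> List[Point]:
    """Connect the last touch point back to the first to form a closed region."""
    if points and points[0] != points[-1]:
        return points + [points[0]]
    return points

def point_in_region(p: Point, polygon: List[Point]) -> bool:
    """Ray casting: count crossings of a horizontal ray from p with the edges."""
    x, y = p
    inside = False
    for i in range(len(polygon) - 1):  # closed polygon: last vertex repeats first
        (x1, y1), (x2, y2) = polygon[i], polygon[i + 1]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def extract_mask(width: int, height: int, outline: List[Point]) -> List[List[bool]]:
    """Per-pixel mask of the extracted 2D object (True = inside the outline)."""
    poly = close_outline(outline)
    return [[point_in_region((x + 0.5, y + 0.5), poly) for x in range(width)]
            for y in range(height)]

# A triangular outline drawn on a small 8x8 image; note the outline is open
# (three points) and is closed automatically, as described above.
mask = extract_mask(8, 8, [(1, 1), (6, 1), (6, 6)])
```

In a real mobile implementation the per-pixel test would typically be replaced by a rasterized polygon fill (e.g. an image library's `fillPoly`-style routine), but the closed-region semantics are the same.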
2.4 Object Setting
In the object setting procedure, the extracted 2D objects are placed at the desired positions with the desired sizes, and speech bubbles are added to them to create a new content.
A transformation widget, shown in Fig. 2, is used to position an object in the augmented world. A user can select one of the x, y, and z axes and translate the object along that axis with a single-touch drag. The object is scaled uniformly with a two-finger pinch or spread motion. The user can also select the cube located at the end of an axis and move it to scale the object along that axis more precisely (Fig. 3).
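The widget's three operations reduce to simple per-axis vector updates. A sketch, assuming positions and scales are plain 3-tuples (the paper does not specify a math library):

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]
AXES = {"x": 0, "y": 1, "z": 2}

def translate_along_axis(position: Vec3, axis: str, delta: float) -> Vec3:
    """Single-touch drag: move the object along the selected axis only."""
    p = list(position)
    p[AXES[axis]] += delta
    return tuple(p)

def uniform_scale(scale: Vec3, pinch_ratio: float) -> Vec3:
    """Pinch/spread: scale all three axes by the two-finger distance ratio."""
    return tuple(s * pinch_ratio for s in scale)

def scale_along_axis(scale: Vec3, axis: str, factor: float) -> Vec3:
    """Dragging the cube at an axis end scales only that axis."""
    s = list(scale)
    s[AXES[axis]] *= factor
    return tuple(s)

pos = translate_along_axis((0.0, 0.0, 0.0), "y", 2.5)   # (0.0, 2.5, 0.0)
scl = uniform_scale((1.0, 1.0, 1.0), 1.5)               # (1.5, 1.5, 1.5)
scl = scale_along_axis(scl, "x", 2.0)                   # (3.0, 1.5, 1.5)
```

The `pinch_ratio` here would be the current distance between the two touch points divided by the distance when the gesture began, so a spread (ratio > 1) enlarges the object and a pinch (ratio < 1) shrinks it.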
A speech bubble template appears on the screen when the user taps the ‘add bubble’ button, and the user can input text into it. The bubble is bound to an object by selecting the target object and is initially placed at a default position. The user can then move the speech bubble along the x and y axes with a 2D position widget (see Fig. 4).
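One way to realize this binding is to store a per-bubble offset from the bound object, with the 2D widget editing only the x and y components. The class and field names below are illustrative, not from the paper:

```python
from typing import Tuple

class SpeechBubble:
    DEFAULT_OFFSET = (0.0, 30.0)  # default placement: above the bound object

    def __init__(self, text: str, target_position: Tuple[float, float]):
        self.text = text
        self.target = target_position          # position of the bound 2D object
        self.offset = list(self.DEFAULT_OFFSET)

    def move(self, dx: float, dy: float) -> None:
        """2D position widget: the bubble moves only in the page's x/y plane."""
        self.offset[0] += dx
        self.offset[1] += dy

    def position(self) -> Tuple[float, float]:
        """Bubble position follows the bound object plus the user-set offset."""
        return (self.target[0] + self.offset[0], self.target[1] + self.offset[1])

bubble = SpeechBubble("Hello!", target_position=(100.0, 50.0))
bubble.move(-10.0, 5.0)
# bubble.position() == (90.0, 85.0)
```

Keeping the offset relative to the object means the bubble stays attached if the object is later repositioned with the transformation widget.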
2.5 Visualization
The visualization procedure allows a user to view the AR content created in the authoring part. By pointing the mobile device’s camera at the selected page of the book, the user can see the augmented objects and speech bubbles (see Fig. 5). The user can move around the page to view the content from various sides.
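Conceptually, this viewing step transforms each object's page-relative position by the tracked pose of the page and projects it with the camera intrinsics. A toy pinhole-projection sketch, with all numeric values illustrative (the paper does not describe its tracking or rendering internals):

```python
from typing import List, Tuple

def transform(pose: List[List[float]],
              p: Tuple[float, float, float]) -> Tuple[float, float, float]:
    """Apply a 3x4 rigid transform [R|t] (page space -> camera space)."""
    x, y, z = p
    return tuple(row[0] * x + row[1] * y + row[2] * z + row[3] for row in pose)

def project_point(point_cam: Tuple[float, float, float],
                  fx: float, fy: float, cx: float, cy: float) -> Tuple[float, float]:
    """Pinhole projection of a camera-space point onto the image plane."""
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)

# Page directly in front of the camera, 2 units away (identity rotation).
pose = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 2.0]]
uv = project_point(transform(pose, (0.5, 0.0, 0.0)), fx=800, fy=800, cx=320, cy=240)
# uv == (520.0, 240.0)
```

As the user walks around the page, the tracker updates `pose` every frame, which is what lets the same authored content be seen from various sides.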
3 Conclusion
Most existing AR authoring systems let users create AR contents with 3D models, either provided by the system or imported by the user. 3D models may provide a more immersive experience, but they are difficult to create, since most users are not familiar with 3D modeling tools. As a result, most users create AR contents only with provided 3D models, which limits the diversity of the AR contents.
This study introduced an AR authoring system that can easily generate various AR contents, at the cost of some of the immersion that 3D models provide. A user creates AR contents with 2D objects extracted from captured or provided images instead of 3D objects. We conducted a user study with a small group of university students and found that users could create diverse contents with the proposed system. As future work, we will conduct a user study with a larger group of people, focusing on the ease of creating various AR contents and on a comparison between 2D-based and 3D-based AR contents.
References
Billinghurst, M., Kato, H., Poupyrev, I.: The MagicBook – moving seamlessly between reality and virtuality. IEEE Comput. Graph. Appl. 21, 1–4 (2001)
Saso, T., Iguchi, K., Inakage, M.: Little Red: storytelling in mixed reality. In: ACM SIGGRAPH 2003 Sketches & Applications, p. 1 (2003)
Grasset, R., Duenser, A., Billinghurst, M.: Edutainment with a mixed reality book: a visually augmented illustrative children’s book. In: International Conference on Advances in Computer Entertainment Technology, pp. 292–295 (2008)
Acknowledgement
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2017-2016-0-00312) supervised by the IITP (Institute for Information & communications Technology Promotion).
© 2018 Springer International Publishing AG, part of Springer Nature
Hong, J.S., Lee, J.W. (2018). Picture Book-Based Augmented Reality Content Authoring System. In: Stephanidis, C. (eds) HCI International 2018 – Posters' Extended Abstracts. HCI 2018. Communications in Computer and Information Science, vol 851. Springer, Cham. https://doi.org/10.1007/978-3-319-92279-9_34
Print ISBN: 978-3-319-92278-2
Online ISBN: 978-3-319-92279-9