Abstract
This research proposes the development of a virtual reality system in which multiple users can collaboratively design and edit 3D models. In the virtual environment, users also see a rendered version of the actual environment built from the Kinect's point clouds. The proposed system provides model building, coloring, realistic dimensions of the actual environment, user collaboration, and STL file export for 3D printing.
1 Introduction
Computer-aided design (CAD) software has played a crucial role in the material and product design process. However, CAD software often lacks clear visualization of 3D models and good collaborative features. In collaborative work, users need to share knowledge and ideas and avoid misunderstandings, and it is difficult to design and edit models collaboratively when the designers are in different places. In the proposed system, users collaborate through the multiplayer high-level API (HLAPI) of Unity, a cross-platform game engine. The engine can be used to create two-dimensional, three-dimensional, virtual reality, and augmented reality games and other experiences [1]. The HLAPI is a set of networking components for building multiplayer capabilities into Unity applications [2].
Virtual reality (VR) is a technology that simulates an environment and allows users to interact with a three-dimensional virtual environment using a headset and controllers. This helps users reduce risk and achieve better understanding and more effective communication. The application can be presented on a VR headset or head-mounted display (HMD), VR controllers, and an LED display. The headset is responsible for mapping the display to a wide field of view [3].
This paper presents the development of a multiuser system for designing and editing three-dimensional models in virtual reality with the HTC VIVE and Microsoft Kinect 2. The Kinect is a depth camera that consists of an RGB camera, an IR emitter, and an IR camera; its depth measurements use speckle pattern technology [4]. The system was developed using Unity3D and the C# language with Microsoft Visual Studio. The proposed system explored the development of a virtual reality environment for designing and editing 3D models, in which users can design and edit 3D models easily and intuitively. The performance of the system was evaluated by using a 3D printer to produce a physical model of a 3D design created in the proposed system.
2 Related Work
This section presents research works related to the proposed system, covering virtual environments for collaboration, designing and editing 3D models, and simulated 3D environments.
Virtual Environment for Collaboration
In virtual environments, collaboration systems mostly use avatars to represent people and provide clear voice communication to enhance users' understanding. A collaboration system must also handle point of view: users see the same object at the same position but from different views. Spacetime [5] supports users seeing the same object even when their views differ. For example, a user can request the same view as another user by touching that user's avatar on the shoulder. Users can also clone another avatar, resize the clone, and place it at any position; the other user can then choose to teleport to the clone's viewpoint. This method can be applied to change perspective in collaborative work.
Designing and Editing Model
To create 3D objects with virtual reality technology and the Unity engine, most developers start by understanding meshes. Mesh manipulation defines how the surface of an object is drawn [6]. The basis of a mesh is its vertices and lines, because a mesh consists of lines connecting vertices. Mesh data consists of four components: vertices, lines or edges, triangles, and a UV map. A triangle is formed when lines connect three vertices in 3D space. The UV map describes how textures wrap around the object's surface. Developers can use mesh manipulation to create and edit 3D objects in Unity because it is easy to understand and can be extended in a variety of ways.
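The four mesh components above can be made concrete with a short sketch. The following Python example is illustrative only (the actual system was implemented in C# with Unity); it builds a unit quad from the components described: vertices, triangle index triples, a UV map, and the edges derived from the triangles.

```python
# Build a unit quad out of two triangles, using the same data layout
# a Unity Mesh uses: a vertex list plus a list of triangle indices.

def make_quad():
    # Four corner vertices of a unit square in the XY plane (x, y, z).
    vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
    # Two triangles, each a triple of vertex indices.
    triangles = [(0, 1, 2), (0, 2, 3)]
    # UV coordinates map each vertex into texture space [0, 1] x [0, 1].
    uvs = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    # Edges are derived from the triangles: each triangle contributes
    # three edges, and shared edges are stored only once.
    edges = set()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges.add(tuple(sorted(e)))
    return vertices, triangles, uvs, sorted(edges)
```

Note that the quad's two triangles share one diagonal edge, so the four vertices and two triangles yield five unique edges.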
Simulated 3D Environment
There are many sensors for scanning a 3D environment and generating point cloud data for rendering meshes in real time. The Kinect RGB-D camera is a sensor that provides RGB, depth, and body skeletal information. Virtualized Reality Using Depth Camera Point Clouds [7] scanned a real environment into VR using the Kinect and reconstructed the mesh in real time with Unity.
3 System Scenario
In the proposed system, two users share a simulated environment scanned from the first user's surroundings. They can communicate with each other, see each other's avatars [8], and view the first user's rendered environment. Users can design and edit a 3D model together in the virtual environment and export the model as an STL file for 3D printing.
4 System Overview
As shown in Fig. 1, the system consisted of three main parts: the simulated environment part, the designing and editing section, and the multiuser support function. The simulated environment part managed the connection between the Microsoft Kinect 2 and Unity. The Kinect 2 was used to collect depth data from an infrared camera and color data from an RGB camera. The simulated environment system mapped the depth positions to the positions of the color data, created the point cloud, and rendered the virtual environment in Unity, as shown in Fig. 2. In the proposed system, as shown in Fig. 3, users wear the VIVE headset to enter the virtual environment, where they can work together and use the controllers to build 3D models. The designing and editing section mapped the users' controller actions to model building commands, while the multiuser support function used the users' action data to drive the avatars' movements.
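The mapping from depth pixels to colored 3D points can be sketched as a back-projection through a pinhole camera model. The Python sketch below is illustrative only: the intrinsic parameters are placeholder values rather than calibrated Kinect 2 intrinsics, and it assumes the color image has already been registered to the depth image, whereas the real system performs this mapping in C# inside Unity.

```python
def depth_to_point(u, v, depth_m, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project one depth pixel (u, v), with depth in meters, into a
    3D camera-space point using the pinhole model. The intrinsics are
    illustrative placeholders, not calibrated Kinect 2 values."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def build_point_cloud(depth_image, color_image):
    """Pair each valid depth pixel with the color at the same registered
    pixel, producing a list of (x, y, z, r, g, b) points."""
    points = []
    for v, row in enumerate(depth_image):
        for u, depth_m in enumerate(row):
            if depth_m <= 0:  # zero marks an invalid depth reading
                continue
            x, y, z = depth_to_point(u, v, depth_m)
            r, g, b = color_image[v][u]
            points.append((x, y, z, r, g, b))
    return points
```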
4.1 The Designing and Editing Section
The designing and editing system flow is shown in Fig. 4. After the user performed an action, the action command was acquired through the HTC VIVE controllers, which are connected to the Unity engine through SteamVR, a Unity plugin for VR development. The Unity engine then performed the corresponding action based on the user's command and sent the visualization data back to the HTC VIVE headset through SteamVR so that the user could see the updated model [9]. The software provides ten features, including move, create shape, resize, paint, cut, copy, export, and extrude.
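The export feature can be illustrated with a minimal ASCII STL writer. The sketch below is a simplified Python illustration, not the system's actual C# exporter; each triangle is written with a facet normal computed from its vertices, and production exporters often prefer the binary STL variant for smaller files.

```python
def export_ascii_stl(name, triangles):
    """Write triangles as an ASCII STL string. Each triangle is three
    (x, y, z) vertices; the facet normal is the normalized cross product
    of two edge vectors (left unnormalized only if degenerate)."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    lines = [f"solid {name}"]
    for v0, v1, v2 in triangles:
        nx, ny, nz = cross(sub(v1, v0), sub(v2, v0))
        length = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
        lines.append(f"  facet normal {nx/length:e} {ny/length:e} {nz/length:e}")
        lines.append("    outer loop")
        for x, y, z in (v0, v1, v2):
            lines.append(f"      vertex {x:e} {y:e} {z:e}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```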
4.2 Multiuser Support Function
Users could perform the collaborative task of building a 3D model in VR through the Photon engine [10], a network engine and multiplayer platform. When a user connects to the Photon server successfully, the user becomes a client in the system. Clients can then see each other's avatars and actions as if they were in the same virtual environment, and they can see the environment scanned from the Kinect as a point cloud.
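Avatar synchronization can be pictured as each client broadcasting a small pose message every frame and applying the messages it receives. Photon's actual API is C#-based and handles this serialization itself; the Python sketch below only illustrates the idea, and all field names are invented for the example.

```python
import json

def pose_message(user_id, position, rotation):
    """Serialize one avatar pose update as JSON. The field names
    ("user", "pos", "rot") are illustrative, not a Photon format."""
    return json.dumps({"user": user_id,
                       "pos": list(position),
                       "rot": list(rotation)})

def apply_pose(avatars, message):
    """Update the local copy of a remote avatar's pose from a
    received message, keyed by user id."""
    data = json.loads(message)
    avatars[data["user"]] = (tuple(data["pos"]), tuple(data["rot"]))
    return avatars
```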
4.3 Simulated Environment
The simulated environment part obtained the depth and color data from the Microsoft Kinect 2 and used them to create the shader and point cloud, based on [11]. Users could then build 3D models that matched the actual environment in the proposed virtual reality system.
5 System Evaluation
There are three system evaluations, covering system performance, usability, and value for a specific task.
5.1 System Performance
Frame rate is the number of frames displayed per second, used in photography, video, and games, and is measured in frames per second (FPS). A common frame rate for a smooth appearance is 24 FPS, and an acceptable frame rate in video games is around 30 to 60 FPS. Frame rate depends on the quantity of data in the system. After testing the multiplayer system, the proposed system showed a maximum frame rate of 75 FPS, a minimum frame rate of 25 FPS, and an average frame rate of 55.46 FPS, which is greater than the common frame rates. The graph in Fig. 5 shows the frame rate changes over a 15-minute period.
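The reported minimum, maximum, and average FPS can be reproduced from per-frame timings. A minimal sketch of the computation, assuming frame durations are recorded in seconds:

```python
def fps_stats(frame_times_s):
    """Convert per-frame durations (seconds) to per-frame FPS values
    and report (min, max, average) FPS over the recording."""
    fps = [1.0 / t for t in frame_times_s if t > 0]
    return min(fps), max(fps), sum(fps) / len(fps)
```

For example, frame times of 20 ms, 40 ms, and 25 ms correspond to 50, 25, and 40 FPS.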
5.2 Usability
Usability indicates whether users can use the system easily and intuitively. Usability tests were performed with 10 participants who were asked to complete a task. There were two types of participants: those with prior experience in designing and editing 3D models and those without. The task was to view a puzzle sorter box scanned by the Kinect, collaboratively design and edit a 3D model, and compare it to a given box in the virtual environment. The 3D model was then printed and inserted into the box in the actual environment to check the fit. The hypothesis was that two users would complete the task in less time than one user. In testing, one user spent an average of 7.27 min while two users spent an average of 4.32 min. Users were then given the USE Questionnaire [12] survey to evaluate the usability of the system in terms of ease of use, ease of learning, and satisfaction. The measurements used a 5-point Likert scale: strongly disagree (1), disagree (2), neutral (3), agree (4), and strongly agree (5). The survey results showed averages of 3.125 (62.5%) for ease of use, 3.5 (70%) for ease of learning, and 3.875 (77.5%) for satisfaction.
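The percentage figures follow from mapping each mean Likert score onto the 0-100% range:

```python
def likert_percent(mean_score, scale_max=5):
    """Map a mean Likert score onto a 0-100% scale, as in the survey
    results above (e.g. a mean of 3.125 out of 5 maps to 62.5%)."""
    return mean_score / scale_max * 100.0
```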
5.3 Value of Specific Task
The specific task of this system is to help users design and edit a 3D model together to complete an assignment. The system was evaluated by software and hardware testing: software testing checks the completion of the given task, while hardware testing prints the 3D model from the virtual collaborative work and checks that it fits the actual environment.
5.4 Discussions and Conclusions
From the results, the proposed system helps multiple users create and edit 3D models together and export them as STL files, so that the printed 3D model fits the actual environment. Users can see and communicate with each other via avatars in virtual reality and share the same 3D scanned environment in real time. The survey results showed that users were satisfied with the system, since the averages for ease of use, ease of learning, and satisfaction were all greater than 50%. However, most users who used VR for the first time needed tutorials or examples of how to use the system. In addition, users could collaboratively apply the features of the proposed system to create a prototype-like object. The bottleneck of the system is the network: in the current version, users cannot join the same room from different networks. Future work includes adding features to support the design of more complicated models and better embedded tutorials.
References
Unity Engine. https://unity.com. Accessed 9 Dec 2019
The multiplayer high-level API. https://docs.unity3d.com. Accessed 9 Dec 2019
D’Orazio, D., Savov, V.: Valve’s VR headset is called the Vive and it’s made by HTC (2015). https://www.theverge.com. Accessed 9 Dec 2019
Shibo, L., Qing, Z.: A new approach to calibrate range image and color image from Kinect. In: Proceedings of the 4th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), vol. 2, pp. 252–255. IEEE, Nanchang (2012)
Xia, H., Herscher, S., Perlin, K., Wigdor, D.: Spacetime: enabling fluid individual and collaborative editing in virtual reality. In: The 31st Annual ACM Symposium on User Interface Software and Technology, pp. 853–866 (2018)
Duffy, S.: Runtime mesh manipulation with Unity. https://www.raywenderlich.com/3169311-runtime-mesh-manipulation-with-unity. Accessed 15 Feb 2020
Cazamias, J., Raj, A.S.: Virtualized Reality Using Depth Camera Point Clouds (2016)
Creating the Avatar. https://docs.unity3d.com/. Accessed 12 Dec 2019
SteamVR Plugin. https://assetstore.unity.com/. Accessed 15 Dec 2019
Photon Intro. https://doc.photonengine.com/en-us/pun/current/getting-started/pun-intro. Accessed 24 Jan 2020
Sugino, H.: Kinect Study Unity. https://github.com/sugi-cho/KinectStudy-Unity/. Accessed 20 Dec 2019
Measuring Usability with the USE Questionnaire. https://www.researchgate.net/publication/230786746_Measuring_Usability_with_the_USE_Questionnaire. Accessed 15 Dec 2019
© 2020 Springer Nature Switzerland AG
Larpkiattaworn, N., Chareonwuttikajorn, P., Punya, P., Charoenseang, S. (2020). Multiuser Virtual Reality for Designing and Editing 3D Models. In: Stephanidis, C., Antona, M. (eds) HCI International 2020 - Posters. HCII 2020. Communications in Computer and Information Science, vol 1225. Springer, Cham. https://doi.org/10.1007/978-3-030-50729-9_12
Print ISBN: 978-3-030-50728-2
Online ISBN: 978-3-030-50729-9