1 Introduction

Huge, destructive earthquakes occur regularly throughout the world. The U.S. Geological Survey reported that huge earthquakes, of magnitude greater than 7.0, occurred 164 times over the last ten years. In Japan, a recent huge earthquake, the Great East Japan Earthquake of March 2011, struck the east coast of the country. Although the resulting tsunami devastated Fukushima, other areas that were not hit by the tsunami, such as Tokyo, were damaged by the shaking itself: shelves toppled, windows broke, and traffic came to a standstill.

Earthquake simulators are important for disaster prevention training, especially in earthquake-prone countries, and several systems that allow people to experience simulated earthquakes have been produced. In Japan, the earthquake simulation vehicle, which carries a room and a mechanism that physically reproduces strong shaking, is well known, but most people have little opportunity to use it. With the recent growth of VR technology, several VR earthquake simulation systems have been proposed [1,2,3]. However, developing three-dimensional VR content for such systems is expensive.

Allowing people to experience a simulated earthquake in their own home is thought to be an effective means of revealing potential risks and of teaching them how to avoid dangers such as items falling onto beds. Based on such an experience, people may rearrange their furniture to reduce potential hazards or make evacuation plans. As such, a low-cost system with which users can construct high-quality 3D models of their surroundings is strongly desired.

We herein propose a novel AI-based VR earthquake simulator that can easily reproduce arbitrary real indoor environments. Users can experience simulated earthquakes in VR rooms constructed by scanning their own rooms with low-cost RGB-D sensors. Each room is reconstructed in VR space as a set of planes with high-resolution surface textures. Moreover, objects in the room are recognized by a recently proposed deep-learning-based object detection technique and converted into 3D models. Finally, the system simulates earthquakes in the VR room in real time using a physics engine. In order to evaluate the system, we conducted a user experience experiment using VR earthquake content and found that users can experience earthquakes in their virtual rooms without a sense of artificiality.

2 AI-Based VR Earthquake Simulation System

In this section, we explain the proposed AI-based VR earthquake simulation system. An overview of the system is shown in Fig. 1. The system process is executed as follows:

Fig. 1. Overview of the system.

1. Scan the entire room using the multi-RGB-D sensor system.

2. Calculate the 3D point cloud of the room within a SLAM framework.

3. Separate the point cloud into planes.

4. Apply object detection to the texture of each constructed plane.

5. Project the reconstructed room into VR.

6. Simulate an earthquake by physically animating the room.

2.1 Scanning and Mapping

We use RGB-D sensors, which are far less expensive than other 3D scanners such as LIDAR, to scan the entire room. Visual simultaneous localization and mapping (SLAM) is commonly used to obtain a 3D model from RGB-D sensors [4]. However, since the angle of view of a single sensor is narrow, scanning an entire room is labor intensive. Furthermore, in a room with few textures, feature points such as SURF keypoints cannot be sufficiently observed, so localization via SLAM fails and a distorted map is generated. In order to address these problems, we developed a multi-RGB-D sensor system, as shown in Fig. 2. By arranging calibrated RGB-D sensors along a circumference, all of the surrounding RGB-D data can be acquired simultaneously. By moving through a room with the sensor rig and performing SLAM, the 3D structure of even a large room can be acquired easily (see Fig. 3). In the present study, we use nine XTION sensors, whose depth range extends to eight meters, which is sufficient for scanning indoor environments. Compared with a single RGB-D sensor, the accuracy of localization is improved, and hence point cloud data with less distortion can be obtained. The system also works outdoors under weak daylight, in areas narrow enough for the sensors to capture depth data.
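To make the fusion step concrete, the following is a minimal sketch, assuming pinhole intrinsics and per-sensor rig extrinsics obtained from an offline calibration (the variable names and thresholds are illustrative, not our implementation), of how depth frames from the calibrated sensors can be merged into one point cloud in a common rig frame:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into an (N, 3) camera-frame point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = (z > 0) & (z < 8.0)              # XTION depth tops out around 8 m
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def merge_clouds(depth_images, intrinsics, T_rig_cam):
    """Map each sensor's cloud through its calibrated 4x4 extrinsics and stack."""
    clouds = []
    for depth, (fx, fy, cx, cy), T in zip(depth_images, intrinsics, T_rig_cam):
        pts = backproject(depth, fx, fy, cx, cy)
        pts_h = np.c_[pts, np.ones(len(pts))]    # homogeneous coordinates
        clouds.append((T @ pts_h.T).T[:, :3])    # into the common rig frame
    return np.vstack(clouds)
```

The merged frame can then be fed to the SLAM front end as if it came from a single wide-angle sensor.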

Fig. 2. Configuration of the multi-RGB-D sensor system. A surrounding RGB-D view can be acquired immediately, so that the entire room can be localized and mapped easily without distortion.

Fig. 3. SLAM with multi-RGB-D data. Loop closing is successful and the map is not distorted.

2.2 Interpretation into Planes

Although a dense, undistorted point cloud can be acquired, a point cloud alone is insufficient for immersive, interactive VR. The system must recognize the structure and shape of the scanned room in order to enable proper physical simulation, and the computational cost of visualizing numerous points in an interactive system is high. In order to deal with these problems, we implemented a framework by which the AI interprets the 3D point cloud as a set of planes.

First, meshes between locally nearest points are constructed as local planes, as shown in Fig. 4. The set of points to be connected is obtained from the pixel connectivity of the original images at scanning time. The local planes are then merged into global planes by a statistical plane estimation method that considers the similarity between local planes, based on the differences between their orientations and positions. The texture of each global plane is created by merging the partial RGB images corresponding to that plane; pixels captured from viewpoints close to and facing the plane at scanning time are given priority. As a result, each wall, floor, and ceiling is automatically recognized as a single unseparated plane with a texture, as shown in Fig. 5.
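As an illustration of the grouping step, the following sketch greedily clusters local planes, each represented by a unit normal n and offset d (with n·x = d) plus a support weight, whenever their orientations and positions are close. The thresholds and the weighted averaging are simplifying assumptions, not the exact statistical estimator used in our implementation:

```python
import numpy as np

def merge_local_planes(normals, offsets, weights,
                       angle_thresh_deg=10.0, dist_thresh=0.05):
    """Greedily group local planes (n . x = d) into global planes."""
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    clusters = []                            # each: [sum w*n, sum w*d, sum w]
    for n, d, w in zip(normals, offsets, weights):
        for c in clusters:
            n_mean = c[0] / np.linalg.norm(c[0])
            d_mean = c[1] / c[2]
            # Similar orientation and position -> same global plane.
            if np.dot(n, n_mean) > cos_thresh and abs(d - d_mean) < dist_thresh:
                c[0] = c[0] + w * np.asarray(n, float)
                c[1] += w * d
                c[2] += w
                break
        else:                                # no match: start a new global plane
            clusters.append([w * np.asarray(n, float), w * d, float(w)])
    return [(c[0] / np.linalg.norm(c[0]), c[1] / c[2]) for c in clusters]
```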

Fig. 4. Converting a point cloud into planes. Local planes are constructed from depth data, and global planes are then robustly estimated from these local planes.

Fig. 5. Recognition result for the room structure. The entire room is reconstructed with high-quality textures.

2.3 Object Detection

The methodology of interpreting the point cloud as planes, described in Sect. 2.2, allows the AI to perform high-quality object recognition with a deep-learning framework. Deep learning has recently made great advances, particularly in two-dimensional object recognition in images. Since our reconstruction yields unseparated textures of entire planes, high-performance deep-learning object detection can be applied to them directly. In the present study, we use YOLO v2 [5], a state-of-the-art real-time object detector, to detect interior items such as clocks, chairs, and books, as shown in Fig. 6.
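As an illustration, the following sketch runs a YOLOv2-style detector over a reconstructed plane texture using OpenCV's DNN module. The configuration and weight file names and the class list are assumptions (we use YOLO v2 [5], but the runtime is not specified here):

```python
import cv2
import numpy as np

# Assumed file names for the YOLO v2 model and a class list such as COCO's,
# which includes "clock", "chair", and "book".
net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")
classes = open("coco.names").read().splitlines()

def detect_on_texture(texture_bgr, conf_thresh=0.5):
    """Detect interior items on one reconstructed plane texture."""
    h, w = texture_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(texture_bgr, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    out = net.forward()              # rows: [cx, cy, bw, bh, objectness, scores...]
    hits = []
    for row in out.reshape(-1, out.shape[-1]):
        scores = row[5:]
        cid = int(np.argmax(scores))
        if scores[cid] > conf_thresh:
            cx, cy, bw, bh = row[:4] * [w, h, w, h]   # relative -> pixel units
            hits.append((classes[cid], float(scores[cid]),
                         (cx - bw / 2, cy - bh / 2, bw, bh)))
    return hits
```

Because each detection lies on a plane whose pose is known, the 2D boxes can be lifted directly to 3D positions on that plane.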

Fig. 6. Example of a detection result for a reconstructed plane texture.

3 VR Earthquake Simulation

The system can generate an interactive virtual room that is a projection of a real room, and it simulates earthquakes in this room using a physics engine. Physical simulation is computationally expensive, which makes real-time operation difficult. In recent years, however, highly powerful graphics processing units (GPUs), which accelerate general parallelizable calculations, have become available. In the present study, we use the PhysX physics engine, which is maintained by NVIDIA.

In the simulation, we use real earthquake data measured by accelerometer-type seismographs of the Japan Meteorological Agency. In order to use the time-sequential acceleration data in PhysX, we convert accelerations to displacements by applying an IIR filter that reflects the frequency response characteristics of the seismograph [6], and we then animate the floor plane accordingly. PhysX simulates the propagation of the earthquake's energy to the items on the floor.
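One plausible realization of this step, using the filter parameters given in Sect. 4.1, is to discretize the displacement response of a second-order (single-degree-of-freedom) seismometer model with the bilinear transform; whether this matches the filter of [6] exactly is an assumption, but the structure is the standard one:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

T, h, dt = 6.0, 0.55, 0.01          # period [s], damping constant, sampling [s]
wn = 2.0 * np.pi / T                # natural angular frequency

# Relative displacement response of an SDOF oscillator to ground acceleration:
#   H(s) = -1 / (s^2 + 2*h*wn*s + wn^2), discretized to an IIR filter.
b, a = bilinear([-1.0], [1.0, 2.0 * h * wn, wn ** 2], fs=1.0 / dt)

def acc_to_disp(acc):
    """Convert an acceleration record (e.g., cm/s^2) to displacement (cm)."""
    return lfilter(b, a, acc)
```

The resulting displacement sequence is what drives the floor plane in the physics engine.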

4 Evaluation

We conducted an experiment to evaluate the user experience provided by the proposed VR earthquake simulator. We evaluate the system from the viewpoints of (i) the effectiveness of the reconstruction in providing a comfortable and immersive VR environment and (ii) its ability to frighten users using only visual cues. For this experiment, we prepared a 3D VR earthquake simulation in which horizontal planes were automatically recognized as boxes or tables and could topple onto the floor. Moreover, items that the AI recognized as "books" toppled to the ground when a strong earthquake was simulated (see Fig. 7).
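The following sketch illustrates this kind of simulation using PyBullet as a stand-in for the PhysX engine that the system actually uses: the floor is driven kinematically by the displacement record, and a dynamic box standing in for a recognized shelf can rock and topple. All sizes and masses are illustrative assumptions:

```python
import pybullet as p

p.connect(p.DIRECT)                  # headless; use p.GUI to watch the scene
p.setGravity(0, 0, -9.8)
p.setTimeStep(0.01)                  # matches the 0.01 s record sampling

# Static (mass 0) floor that we drive kinematically, plus one dynamic "shelf".
floor = p.createMultiBody(
    0, p.createCollisionShape(p.GEOM_BOX, halfExtents=[5, 5, 0.1]))
shelf = p.createMultiBody(
    5.0, p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.2, 0.5, 0.9]),
    basePosition=[0, 0, 1.0])

def run(displacements_xy):
    """Drive the floor with per-step horizontal displacements (meters)."""
    for dx, dy in displacements_xy:
        p.resetBasePositionAndOrientation(floor, [dx, dy, 0], [0, 0, 0, 1])
        p.stepSimulation()           # the shelf responds and may topple
```

Teleporting a static body is a simplification; a production simulation would move a kinematic actor with proper velocities so that friction is transmitted faithfully.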

Fig. 7. Example of simulated scenes, in which a large degree of up/down translation of the floor can be observed.

4.1 Conditions

We surveyed eight subjects: university students, technical staff, and researchers who worked in the same office building every day and were thus familiar with the reconstructed environment, which allowed for an immersive virtual experience. The subjects reported various levels of prior experience with immersive VR.

In the present study, we use the HTC Vive, a VR system comprising a head-mounted display (HMD) and a position tracker, so that head direction and position can be reflected in VR space in real time.

During the experiment, the subjects experienced a simulation of the Great East Japan Earthquake of March 2011. We used the data observed at Okeya Town in Miyagi Prefecture, where the maximum acceleration was 479.1 cm/s². The parameters for the IIR filter are as follows: the period of the seismograph is 6.0 s, the damping constant is 0.55, and the sampling interval is 0.01 s.

4.2 Protocol

First, we asked each subject to go into a room other than the one that had been reconstructed. Each subject wore an HMD while sitting on a fixed stool. A table was placed in front of the stool in order to fix each subject's initial body orientation, but we provided no instructions as to body or head orientation. Based on this initial orientation, a bookshelf was placed in the VR room so as to enhance the user experience. Before starting the simulation, we told each subject that we would stop the experiment whenever he or she reported any discomfort. After the experiment started, the earthquake was simulated for approximately two minutes.

After the experiment, we asked each subject to answer a questionnaire on a seven-point scale (see Table 1). Table 2 lists the questions in the questionnaire. The subjects were also asked to provide the reason for each response in the form of free comments.

Table 1. Seven grades for each question.
Table 2. Questions posed in the questionnaire.

4.3 Results

The questionnaire results are shown in Fig. 8. Most of the subjects were in agreement on all questions. The results for Q1 and related comments, such as "The room seemed familiar" and "The room seemed neither realistic nor fictional", indicate that the proposed method can construct high-quality 3D spaces of the kind people are used to seeing in daily life. The results for Q2 and related comments, such as "The shaking and the changing view felt realistic" and "That is what feels unpleasant during a real earthquake", indicate that the system properly simulated earthquakes and that the VR space did not adversely affect the earthquake simulation. However, subject #8 responded "definitely disagree" to Q2 and commented that "The quake was slow. It seemed as if the camera were shaking." Several causes can be considered: (i) the original earthquake consisted primarily of long-period wave components, (ii) very-short-period waves were filtered out when accelerations were converted to displacements, and (iii) the camera position in the VR system was fixed with respect to the HMD and was not influenced by the earthquake. Item (i) suggests that the subject in fact felt as if experiencing a real earthquake. Item (ii) could be addressed by adding back short-period waves, and item (iii) by incorporating a physical model of the human head into the simulation. Both (ii) and (iii) can be easily implemented in the proposed system.

Fig. 8. Questionnaire results.

5 Conclusion

We proposed a novel earthquake simulation system that can easily reproduce arbitrary real indoor environments by using AI technologies. The system recognizes and reconstructs real 3D spaces and projects them into VR spaces in which users can become immersed and with which they can interact. We developed a system that scans an indoor environment, reconstructs it as planes with high-quality textures, and recognizes interior items in those textures by applying deep-learning object detection. A physical simulation of an earthquake was then performed in the reconstructed VR space. The proposed system can be extended to various kinds of VR experiences that reflect and reconstruct real environments.