
Generating Synthetic LiDAR Point Cloud Data for Object Detection Using the Unreal Game Engine

  • Conference paper
  • First Online:
Design Science Research for a Resilient Future (DESRIST 2024)

Abstract

Object detection based on artificial intelligence is ubiquitous in today's computer vision research and applications. Training neural networks for object detection requires large, high-quality datasets. Besides datasets based on image data, datasets derived from point clouds offer several advantages. However, training datasets are sparse and their generation requires considerable effort, especially in industrial domains. Generating synthetic point cloud data offers a solution to this issue. Following the design science research method, the work at hand proposes an approach, and its instantiation, for generating synthetic point cloud data with the Unreal Engine. The point cloud quality is evaluated by comparing the synthetic cloud to a real-world point cloud. A practical example successfully demonstrates the applicability of the Unreal game engine for synthetic point cloud generation.
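The abstract states that point cloud quality is evaluated by comparing the synthetic cloud to a real-world scan. A common metric for such cloud-to-cloud comparisons is the symmetric Chamfer distance; the sketch below is an illustrative assumption, not the paper's actual evaluation procedure, and the `real`/`synthetic` clouds are toy data.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3)."""
    # Pairwise difference vectors between every point in a and every point in b
    diff = a[:, None, :] - b[None, :, :]           # shape (N, M, 3)
    d2 = np.einsum("nmk,nmk->nm", diff, diff)      # squared distances, (N, M)
    # Average nearest-neighbour squared distance in both directions
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Toy example: a "real" scan and a slightly noisy "synthetic" copy of it
rng = np.random.default_rng(0)
real = rng.uniform(-1.0, 1.0, size=(500, 3))
synthetic = real + rng.normal(scale=0.01, size=real.shape)

print(chamfer_distance(real, synthetic))   # small value: clouds are similar
print(chamfer_distance(real, -real))       # larger value: clouds differ
```

For large scans, the O(N·M) pairwise matrix would be replaced by a k-d tree nearest-neighbour query, but the metric itself is unchanged.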




Corresponding author

Correspondence to Mathias Eggert.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Eggert, M., Schade, M., Bröhl, F., Moriz, A. (2024). Generating Synthetic LiDAR Point Cloud Data for Object Detection Using the Unreal Game Engine. In: Mandviwalla, M., Söllner, M., Tuunanen, T. (eds) Design Science Research for a Resilient Future. DESRIST 2024. Lecture Notes in Computer Science, vol 14621. Springer, Cham. https://doi.org/10.1007/978-3-031-61175-9_20


  • DOI: https://doi.org/10.1007/978-3-031-61175-9_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-61174-2

  • Online ISBN: 978-3-031-61175-9

  • eBook Packages: Computer Science (R0)
