Abstract
Detection and avoidance of dynamic obstacles is an integral part of social robot navigation. Reliable human detection typically depends on camera-based identification, which requires computationally expensive algorithms running on a Graphics Processing Unit (GPU). The process is time-consuming, introduces latency, and cannot run on low-end systems. Human detection and tracking also require lidar data fusion to ensure proper localization. In this work, we propose a detection strategy that allows the fusion system to run at lower camera frame rates, considerably decreasing latency and computational requirements. We show the effectiveness of the proposed approach in simulation.
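The strategy described above can be illustrated with a minimal sketch: an expensive camera-based detection runs only intermittently, while a constant-velocity Kalman filter, fed by cheap lidar position measurements, keeps the person track alive between camera frames. All names, rates, and noise values here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

DT = 0.1               # tracker update period, e.g. a 10 Hz lidar (assumed)
CAMERA_PERIOD = 10     # run the camera detector every 10th cycle, i.e. 1 Hz (assumed)

F = np.array([[1, 0, DT, 0],   # constant-velocity model, state: [x, y, vx, vy]
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # both sensors observe position (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01           # process noise (assumed)
R_CAM = np.eye(2) * 0.05       # fused camera+lidar fix: low measurement noise
R_LIDAR = np.eye(2) * 0.20     # lidar-only association: higher measurement noise

def predict(x, P):
    """Propagate the state and covariance one lidar cycle forward."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Standard Kalman correction step with measurement z and noise R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def track(measurements):
    """measurements: list of (x, y) person positions, one per lidar cycle.
    Returns the filtered position estimate at each cycle."""
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    estimates = []
    for k, z in enumerate(measurements):
        x, P = predict(x, P)
        # The expensive camera detector fires only every CAMERA_PERIOD
        # cycles; in between, lidar-only updates carry the track.
        R = R_CAM if k % CAMERA_PERIOD == 0 else R_LIDAR
        x, P = update(x, P, np.asarray(z, dtype=float), R)
        estimates.append(x[:2].copy())
    return estimates
```

Because the filter bridges the gaps between camera detections, the GPU-bound detector can run an order of magnitude slower than the lidar loop, which is the source of the latency and energy savings the abstract claims.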
This work was supported by AM2R project “Mobilizing Agenda for business innovation in the Two Wheels sector” funded by PRR - Recovery and Resilience Plan and by the Next Generation EU Fund, under reference C644866475-00000012|7253; HAVATAR project funded by FCT - Fundação para a Ciência e a Tecnologia, under reference PTDC/EEI-ROB/1155/2020; and Ultrabot project, funded by the Portuguese National Innovation Agency (ANI), under reference CENTRO-01-0247-FEDER-072644.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Silva, C.A., Dogru, S., Marques, L. (2024). Improving Energy Performance of Camera Lidar Fusion by Intermittent Human Detection for Social Navigation. In: Marques, L., Santos, C., Lima, J.L., Tardioli, D., Ferre, M. (eds) Robot 2023: Sixth Iberian Robotics Conference. ROBOT 2023. Lecture Notes in Networks and Systems, vol 976. Springer, Cham. https://doi.org/10.1007/978-3-031-58676-7_10
DOI: https://doi.org/10.1007/978-3-031-58676-7_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-58675-0
Online ISBN: 978-3-031-58676-7
eBook Packages: Intelligent Technologies and Robotics (R0)