
Multi-sensor-based detection and tracking of moving objects for relative position estimation in autonomous driving conditions

The Journal of Supercomputing

Abstract

Moving object detection (MOD) technology combines detection, tracking, and classification to provide information such as local and global position estimates and velocities of surrounding objects in real time, at a rate of at least 15 fps. To operate an autonomous vehicle on real roads, a multi-sensor-based object detection and classification module must run concurrently with the rest of the autonomous driving system to ensure safe driving. In addition, object detection must achieve high-speed processing on the limited hardware platform of an autonomous vehicle. To address this problem, we modified a detector based on Redmon's DARKNET deep learning framework (https://pjreddie.com/darknet/yolo) to obtain local position estimates in real time. The aim of this study was to obtain the local position of a moving object by fusing information from multiple cameras and one RADAR. To this end, we built a fusion server that synchronizes and converts the multi-object information from the multiple sensors on our autonomous vehicle. In this paper, we introduce a method for local position estimation that covers the surrounding view, including the long-, middle-, and short-range views. We also describe a method that handles the problems caused by steep slopes and curved roads while driving. Finally, we present the results of our proposed MOD-based detection and tracking estimation, which enabled us to obtain a license for autonomous driving in Korea.
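The core fusion idea in the abstract, converting a camera detection and a RADAR range into an object's local position in the vehicle frame, can be illustrated with a minimal sketch. This is not the paper's actual fusion server: the function names, the 90° field-of-view, and the pinhole-camera assumption are all hypothetical, and a real system would also handle time synchronization and data association across sensors.

```python
import math

def bearing_from_bbox(bbox_center_x: float, image_width: float,
                      horizontal_fov_deg: float) -> float:
    """Approximate azimuth (radians) of a detection from its bounding-box
    center column, assuming a forward-facing pinhole camera.
    Positive bearing means the object is to the image right."""
    offset = (bbox_center_x - image_width / 2.0) / (image_width / 2.0)
    return math.radians(offset * horizontal_fov_deg / 2.0)

def local_position(range_m: float, bearing_rad: float) -> tuple:
    """Fuse a RADAR range with a camera bearing into (x forward, y left)
    coordinates in the vehicle frame."""
    x = range_m * math.cos(bearing_rad)
    y = -range_m * math.sin(bearing_rad)  # image-right maps to negative y (vehicle right)
    return x, y

# Object detected at the center of a 1280-px image, RADAR range 20 m:
# it lies straight ahead of the vehicle.
b = bearing_from_bbox(640.0, 1280.0, 90.0)
x, y = local_position(20.0, b)

# Object at pixel column 960 (halfway to the right edge): bearing is
# half of the half-FOV, i.e. 22.5 degrees to the right.
b2 = bearing_from_bbox(960.0, 1280.0, 90.0)
x2, y2 = local_position(20.0, b2)
```

In practice the linear pixel-to-angle mapping above would be replaced by the camera's calibrated intrinsics, and the bearing/range pair would be matched to a tracked object before being reported to the planner.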



References

  1. Gebremeskel GB, Chai Y, Yang Z (2014) The paradigm of big data for augmenting internet of vehicle into the intelligent cloud computing systems. In: Internet of vehicles—technologies and services, pp 247–261

  2. Alam KM, Saini M, El Saddik A (2014) A social network of vehicles under internet of things. In: Internet of vehicles—technologies and services, pp 227–236

  3. Finogeev AG, Parygin DS, Finogeev AA (2017) The convergence computing model for big sensor data mining and knowledge discovery. Hum Centric Comput Inf Sci

  4. Baidu: Apollo. https://github.com/ApolloAuto/apollo

  5. Somerville H, Lienert P, Sage A (2018) Uber's use of fewer safety sensors prompts questions after Arizona crash. Reuters. https://www.reuters.com/article/us-uber-selfdriving-sensors-insight/ubers-use-of-fewer-safety-sensors-prompts-questions-after-arizona-crash-idUSKBN1H337Q

  6. NAVER (2018). https://www.naverlabs.com/autonomousDriving

  7. Google (2018). https://www.google.com/selfdrivingcar/

  8. Li B, Zhang T, Xia T (2016) Vehicle detection from 3D Lidar using fully convolutional network. In: Computer vision and pattern recognition

  9. Viola P, Jones M (2001) Robust real-time object detection. Int J Comput Vis

  10. Edgar S, Jean-Bernard H (2017) Probabilistic global scale estimation for MonoSLAM based on generic object detection. In: Workshop on visual odometry, computer vision and pattern recognition 2017

  11. Redmon J. DARKNET. https://pjreddie.com/darknet/yolo

  12. Redmon J, Farhadi A (2016) YOLO9000: better, faster, stronger. In: Computer vision and pattern recognition

  13. Gálvez-López D, Salas M, Tardós JD, Montiel JMM (2016) Real-time monocular object SLAM. Robot Auton Syst 75:435–449


  14. Salas-Moreno R, Newcombe R, Strasdat H, Kelly P, Davison A (2013) SLAM++: simultaneous localisation and mapping at the level of objects. In: Computer Vision and Pattern Recognition

  15. Fu Y, Wang C (2018) Moving object localization based on UHF RFID phase and laser clustering. Sensors 18:825


  16. Zhong Z (2018) Camera radar fusion for increased reliability in ADAS applications. Soc Imaging Sci Technol 2018:258–262


  17. Liang M, Yang B, Wang S, Urtasun R (2018) Deep continuous fusion for multi-sensor 3D object detection. In: ECCV, pp 641–656

  18. Thakur R (2016) Scanning LIDAR in advanced driver assistance systems and beyond: building a road map for next-generation LIDAR technology. IEEE Consum Electron Mag 5:48–54


  19. Munir A (2017) Safety assessment and design of dependable cybercars: for today and the future. IEEE Consum Electron Mag 6:69–77


  20. Francesco P, Cristian Z, Andrea N, Piero O, Sergio S (2018) Is consumer electronics redesigning our cars? Challenges of integrated technologies for sensing, computing, and storage. IEEE Consum Electron Mag 7:8–17


  21. Scaramuzza DF (2009) Absolute scale in structure from motion from a single vehicle mounted camera by exploiting nonholonomic constraints. In: International Conference on Computer Vision

  22. Davison AJ (2003) Real-time simultaneous localization and mapping with a single camera. In: International Conference on Computer Vision, France

  23. Kim J (2018) Multi-camera based local position estimation for moving objects detection. In: BigComp 2018, Shanghai, China

  24. Park M, Lee S, Han W (2015) Development of steering control system for autonomous vehicle using geometry-based path tracking algorithm. ETRI J 37:617–625


  25. Noh S et al (2015) Co-pilot agent for vehicle/driver cooperative and autonomous driving. ETRI J 37:1032–1043


  26. UDACITY self-driving-car dataset annotations. https://github.com/UDACITY/self-driving-car/tree/master/annotations

  27. NVIDIA, CUDA Technology. http://www.nvidia.com/CUDAS


Acknowledgements

This work was supported by Institute for Information and communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2016-0-00004, Development of Driving Computing System Supporting Real-time Sensor Fusion Processing for Self-Driving Car).

Author information


Corresponding author

Correspondence to Jinwoo Kim.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kim, J., Choi, Y., Park, M. et al. Multi-sensor-based detection and tracking of moving objects for relative position estimation in autonomous driving conditions. J Supercomput 76, 8225–8247 (2020). https://doi.org/10.1007/s11227-019-02811-y

