
Visual driving assistance system based on few-shot learning

  • Special Issue Paper
  • Published in: Multimedia Systems

Abstract

With the growing number of vehicles and the increasing diversity of road conditions, driving safety has attracted more and more attention. In recent years, autonomous driving technology by Franke et al. (IEEE Intell Syst Their Appl 13(6):40–48, 1998) and unmanned driving technology by Zhang et al. (CAAI Trans Intell Technol 1(1):4–13, 2016) have entered our field of vision. Both automatic driving by Levinson et al. (Towards fully autonomous driving: systems and algorithms, 2011) and unmanned driving by Im et al. (Unmanned driving of intelligent robotic vehicle, 2009) use a variety of sensors to perceive the environment around the vehicle and a variety of decision and control algorithms to steer it in motion. A visual driving assistance system by Watanabe et al. (Driving assistance system for appropriately making the driver recognize another vehicle behind or next to present vehicle, 2010), used together with a target recognition algorithm by Pantofaru et al. (Object recognition by integrating multiple image segmentations, 2008), presents the driver with the environment around the vehicle in real time. In recent years, few-shot learning by Li et al. (Comput Electron Agric 2:2, 2020) has become a new direction for target recognition, as it reduces the difficulty of collecting training samples. In this paper, on the one hand, several low-light cameras with fish-eye lenses collect and reconstruct the environment around the vehicle; on the other hand, an infrared camera and a lidar capture the environment in front of the vehicle. A few-shot learning method then identifies vehicles and pedestrians in the forward-view image. In addition, we implement the system on embedded devices to meet miniaturization requirements. The resulting system meets the needs of most drivers at this stage and can effectively support the continued development of automatic and unmanned driving.
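As a concrete illustration of the fish-eye acquisition step described above (not code from the paper), the sketch below undistorts one frame with OpenCV's fisheye module, which implements the generic camera model of Kannala and Brandt (reference 16 below); the file name, intrinsics K, and distortion coefficients D are illustrative assumptions, not the paper's calibration.

```python
import cv2
import numpy as np

img = cv2.imread("front_fisheye.png")          # hypothetical input frame
h, w = img.shape[:2]
K = np.array([[420.0, 0.0, w / 2],             # assumed intrinsics
              [0.0, 420.0, h / 2],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])  # assumed distortion coeffs

# Build the undistortion maps once; remap every incoming frame with them.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.0)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```

For the recognition stage, the following is a minimal, simplified sketch of episode-based few-shot classification in the spirit of the relation network of Sung et al. (reference 9 below, which compares learned feature maps with a trainable relation module); the tiny CNN, the vector-based relation head, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvEmbedding(nn.Module):
    """Small CNN mapping an image crop to a feature vector (illustrative)."""
    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class RelationHead(nn.Module):
    """Learned score for how well a query matches each class prototype."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 1), nn.Sigmoid(),
        )

    def forward(self, query, prototypes):
        # query: (Q, D), prototypes: (C, D) -> relation scores (Q, C)
        q = query.unsqueeze(1).expand(-1, prototypes.size(0), -1)
        p = prototypes.unsqueeze(0).expand(query.size(0), -1, -1)
        return self.net(torch.cat([q, p], dim=-1)).squeeze(-1)

# One episode: C classes (e.g. vehicle, pedestrian), K support shots each.
C, K, Q = 2, 5, 4
embed, relate = ConvEmbedding(), RelationHead()
support = torch.randn(C * K, 3, 84, 84)   # labelled support images
queries = torch.randn(Q, 3, 84, 84)       # forward-view crops to classify
prototypes = embed(support).view(C, K, -1).mean(dim=1)  # per-class mean
scores = relate(embed(queries), prototypes)             # (Q, C)
pred = scores.argmax(dim=1)               # predicted class per query crop
```

At training time, episodes like this would be sampled repeatedly and the two networks optimized end to end; the small labelled support set per class is what keeps the sample-collection burden low.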


References

  1. Franke, U., Gavrila, D., Gorzig, S., et al.: Autonomous driving goes downtown. IEEE Intell. Syst. Their Appl. 13(6), 40–48 (1998)


  2. Zhang, X., Gao, H., Guo, M., et al.: A study on key technologies of unmanned driving. CAAI Trans. Intell. Technol. 1(1), 4–13 (2016)


  3. Levinson, J., Askeland, J., Becker, J., et al.: Towards fully autonomous driving: systems and algorithms. In: IEEE Intelligent Vehicles Symposium (IV) (2011)

  4. Im, D.Y., Ryoo, Y.J., Kim, D.Y., et al.: Unmanned driving of intelligent robotic vehicle. In: ISIS Symposium on Advanced Intelligent Systems (2009)

  5. Watanabe, T., Oshida, K., Matsumoto, Y., et al.: Driving assistance system for appropriately making the driver recognize another vehicle behind or next to present vehicle. (2010)

  6. Pantofaru, C., Schmid, C., Hebert, M.: Object recognition by integrating multiple image segmentations. In: European Conference on Computer Vision (ECCV) (2008)

  7. Li, Y., Yang, J.: Few-shot cotton pest recognition and terminal realization. Comput. Electron. Agric. 2, 2 (2020)


  8. Martinez, E., Diaz, M., Melenchon, J., et al.: Driving assistance system based on the detection of head-on collisions. (2008)

  9. Sung, F., Yang, Y., Zhang, L., et al.: Learning to compare: Relation network for few-shot learning. (2017)

  10. Bahl, P., Padmanabhan, V.N.: RADAR: an in-building RF-based user location and tracking system. In: Proceedings of IEEE INFOCOM 2000, Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies (2000)

  11. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice Hall (2008)


  12. Chen, X.: Reversing radar system based on CAN bus. International Conference on Industrial Mechatronics & Automation, (2009)

  13. Yang, Z.L., Guo, B.L.: Image mosaic based on SIFT. 4th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, (2008)

  14. Cylindrical projection. In: Dictionary Geotechnical Engineering/Wörterbuch GeoTechnik. Springer (2014)


  15. Papadakis, P., Pratikakis, I., Perantonis, S., et al.: Efficient 3D shape matching and retrieval using a concrete radialized spherical projection representation. Pattern Recogn. 40(9), 2437–2452 (2007)


  16. Kannala, J., Brandt, S.S.: A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 28(8), 1335–1340 (2006)

  17. Cooper, K.B., Dengler, R.J., Llombart, N., et al.: Penetrating 3-D imaging at 4- and 25-m range using a submillimeter-wave radar. IEEE Trans. Microwave Theory Tech. 56(12), 2771–2778 (2008)


  18. Lim, K., Treitz, P., Wulder, M., et al.: LiDAR remote sensing of forest structure. Prog. Phys. Geogr. 27(1), 88–106 (2003)


  19. Wen, J., Yang, J., Jiang, B., Song, H., Wang, H.: Big data driven marine environment information forecasting: A time series prediction network. IEEE Trans. Fuzzy Syst. 29(1), 4–18 (2021)


  20. Poulton, C.V., Yaacobi, A., Cole, D.B., et al.: Coherent solid-state LIDAR with silicon photonic optical phased arrays. Opt. Lett. 42(20), 4091–4094 (2017)


  21. Yang, J., Zhang, J., Wang, H.: Urban traffic control in software defined Internet of Things via a multi-agent deep reinforcement learning approach. IEEE Trans. Intell. Transp. Syst. https://doi.org/10.1109/TITS.2020.3023788

  22. Han, F.: A two-stage approach to people and vehicle detection with HOG-based SVM. In: PerMIS (2006)

  23. Jiafu, J., Hui, X.: Fast pedestrian detection based on HOG-PCA and gentle AdaBoost. (2012)

  24. Jiachen, Y., Yang, Z., Jiacheng, L., Bin, J., Wen, L., Xinbo, G.: No reference quality assessment for screen content images using stacked auto-encoders in pictorial and textual regions. IEEE Trans. Cybern. (early access)

  25. Girshick, R., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)

  26. Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (2015)

  27. Ren, S., He, K., Girshick, R., et al.: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2017)


  28. Liu, W., Anguelov, D., Erhan, D., et al.: SSD: Single shot multibox detector. In: European Conference on Computer Vision (ECCV) (2016)

  29. Redmon, J., Divvala, S., Girshick, R., et al.: You only look once: Unified, real-time object detection. (2015)

  30. Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. (2016)

  31. Santoro, A., Bartunov, S., Botvinick, M., et al.: One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, (2016)

  32. Li, Y., Yang, J.: Meta-learning baselines and database for few-shot classification in agriculture. Comput. Electron. Agric. 2, 2 (2021)


  33. Jedrasiak, K., Nawrat, A.: The Comparison of Capabilities of Low Light Camera, Thermal Imaging Camera and Depth Map Camera for Night Time Surveillance Applications. (2013)

  34. Wu, J., Zhao, F., Zhang, X.: Infrared camera. (2006)

  35. Killinger, D.K., Chan, K.P.: Solid-state lidar measurements at 1 and 2 µm. In: Optics, Electro-Optics, and Laser Applications in Science and Engineering. International Society for Optics and Photonics (1991)

  36. Huang, H.: Research on panoramic digital video mosaic algorithm. Appl. Mech. Mater. 71–78, 3967–3970 (2011)


  37. Zhou, W., Liu, Y., Lyu, C., et al.: Real-time implementation of panoramic mosaic camera based on FPGA. 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR). IEEE, (2016)

  38. Liang, L., Xiao, X., Jia, Y., et al.: Non-overlap region based automatic global alignment for ring camera image mosaic. (2008)

  39. Yu, W., Chung, Y., Soh, J.: Vignetting Distortion Correction Method for High Quality Digital Imaging. Proceedings of the 17th International Conference on Pattern Recognition, (2004)

  40. Kruger, R.A., et al.: Light equalization radiography. Med. Phys. (1998)

  41. Ali, W., Abdelkarim, S., Zahran, M., et al.: YOLO3D: End-to-end real-time 3D Oriented Object Bounding Box Detection from LiDAR Point Cloud. (2018)

  42. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525 (2017)

  43. Yang, J., Wang, C., Wang, H., et al.: A RGB-D based real-time multiple object detection and ranging system for autonomous driving. IEEE Sens. J. 99, 1–1 (2020)


  44. Redmon, J., Farhadi, A.: YOLOv3: An incremental improvement. arXiv e-prints, (2018)

  45. Piella, G.: A general framework for multiresolution image fusion: From pixels to regions. Inf. Fusion 4(4), 259–280 (2003)



Acknowledgements

This work was supported by National Natural Science Foundation of China (No. 61871283), Foundation of Pre-Research on Equipment of China (No. 61400010304), and Major Civil-Military Integration Project in Tianjin, China (No. 18ZXJMTG00170).

Author information

Corresponding author

Correspondence to Shan Liu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest regarding this work, and no commercial or associative interest that represents a conflict of interest in connection with the submitted work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Liu, S., Tang, Y., Tian, Y. et al. Visual driving assistance system based on few-shot learning. Multimedia Systems 29, 2853–2863 (2023). https://doi.org/10.1007/s00530-021-00830-5

