
Online Multi-object Tracking-by-Clustering for Intelligent Transportation System with Neuromorphic Vision Sensor

  • Conference paper
KI 2017: Advances in Artificial Intelligence (KI 2017)

Abstract

Instead of wastefully transmitting entire images at a fixed frame rate, neuromorphic vision sensors transmit only the local pixel-level changes caused by movement in a scene, at the time they occur. The result is a stream of events with a latency on the order of microseconds. While these sensors offer tremendous advantages in terms of latency and bandwidth, their unique event-based pixel-level output requires new, adapted approaches to computer vision. In this contribution, we propose an online multi-target tracking system using neuromorphic vision sensors, which is the first neuromorphic vision system applied to intelligent transportation systems. To track moving targets, we develop a fast and simple object detection algorithm based on clustering techniques. To make full use of the low latency, we integrate an online tracking-by-clustering system running at a high frame rate, far exceeding the real-time capabilities of traditional frame-based industrial cameras. The performance of the system is evaluated on real-world dynamic vision sensor data from a highway bridge scenario. We hope that our attempt will motivate further research on neuromorphic vision sensors for intelligent transportation systems.
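The abstract describes the pipeline only at a high level: events from the dynamic vision sensor are accumulated over short time windows, clustered into object detections, and the resulting clusters are associated across windows. The sketch below illustrates one way such a tracking-by-clustering loop can be structured; it is not the authors' implementation. The event format, the use of scikit-learn's DBSCAN for the clustering step, the window length, the distance threshold, and the greedy nearest-centroid association are all assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN


class ClusterTracker:
    """Toy tracking-by-clustering loop for DVS event windows (illustrative only)."""

    def __init__(self, max_match_dist=20.0):
        self.tracks = {}            # track id -> last known cluster centroid (x, y)
        self.next_id = 0
        self.max_match_dist = max_match_dist

    def update(self, events):
        """Cluster one short window of events and associate clusters with tracks.

        events: (N, 2) float array of pixel coordinates accumulated over a few
        milliseconds of sensor output (timestamps and polarity are ignored here).
        Returns a dict mapping track id -> centroid for this window.
        """
        if len(events) == 0:
            return {}
        labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(events)
        centroids = [events[labels == k].mean(axis=0)
                     for k in set(labels) if k != -1]    # label -1 marks noise events

        assignments = {}
        for c in centroids:
            # Greedy nearest-centroid association; a new track is opened when no
            # existing track lies within max_match_dist pixels.
            best_id, best_dist = None, self.max_match_dist
            for tid, prev in self.tracks.items():
                d = np.linalg.norm(c - prev)
                if d < best_dist:
                    best_id, best_dist = tid, d
            if best_id is None:
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = c
            assignments[best_id] = c
        return assignments


# Usage: feed consecutive event windows at a high update rate.
tracker = ClusterTracker()
window = np.random.rand(500, 2) * [240, 180]   # placeholder events on a 240x180 sensor
print(tracker.update(window))
```

In practice the window length, clustering parameters, and association threshold would be tuned to the sensor resolution and traffic speed; the paper itself evaluates its detector and tracker on real DVS recordings of a highway bridge scenario.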


Notes

  1. https://motchallenge.net.


Acknowledgments

The research leading to these results has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).

Author information


Corresponding author

Correspondence to Guang Chen.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Hinz, G. et al. (2017). Online Multi-object Tracking-by-Clustering for Intelligent Transportation System with Neuromorphic Vision Sensor. In: Kern-Isberner, G., Fürnkranz, J., Thimm, M. (eds) KI 2017: Advances in Artificial Intelligence. KI 2017. Lecture Notes in Computer Science, vol. 10505. Springer, Cham. https://doi.org/10.1007/978-3-319-67190-1_11


  • DOI: https://doi.org/10.1007/978-3-319-67190-1_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67189-5

  • Online ISBN: 978-3-319-67190-1

  • eBook Packages: Computer Science (R0)
