Published by De Gruyter, March 11, 2022

Dynamische visuelle Passermarken

Dynamic visual fiducial markers
  • Raul Acuna

    Dr.-Ing. Raul Acuna studied electrical engineering and mechatronics at Simón Bolívar University and received his doctorate from TU Darmstadt in the Control Methods and Robotics group. He currently works as a computer vision engineer at Hyundai MOBIS. Main fields of work: machine vision, camera calibration, mobile robotics.

    and Volker Willert

    Prof. Dr.-Ing. Volker Willert is professor of machine vision in the Faculty of Electrical Engineering at the Würzburg-Schweinfurt University of Applied Sciences. Main fields of work: machine vision, mobile robotics, machine learning, multi-agent systems.


Zusammenfassung

This paper presents a design for a visual fiducial marker for camera pose estimation, the so-called dynamic fiducial marker. The marker is shown on a display device and adapts its appearance to the spatial and temporal requirements of the visual perception task. This is particularly advantageous for mobile robots that use a camera as a sensor. A feedback controller is designed that changes the appearance of the marker as a function of the current camera pose, in order to enlarge the range in which the camera pose can be estimated and to achieve higher pose-estimation accuracy than conventional passive fiducial markers.

Abstract

This paper introduces a new visual fiducial marker design for machine vision applications, the dynamic fiducial marker. With this new fiducial, the pose estimate of a calibrated camera can be improved by dynamically adjusting the fiducial shown on a display device. The dynamic fiducial marker can change its appearance according to the spatiotemporal requirements of the visual perception task, which is especially useful for a mobile robot using a camera as a sensor. We present a feedback control scheme that changes the appearance of the marker depending on the current pose of the camera, in order to increase the detection range for pose estimation and to achieve higher pose-estimation accuracy than common passive fiducial markers.
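The feedback idea described in the abstract, i.e., adapting the displayed marker to the current camera pose, can be sketched as a simple control rule: rescale the marker shown on the display so that its projected image stays near a target pixel size, which keeps detection reliable over a wide range of camera distances. The following is a minimal illustration under a pinhole-camera assumption, not the authors' implementation; all function names and parameter values are hypothetical.

```python
# Minimal sketch of a dynamic-fiducial feedback rule (illustrative only):
# the marker side length shown on the display is chosen from the camera
# distance estimated at the last detection, so the marker's projected
# image size stays close to a target pixel width.

def projected_size_px(marker_side_m: float, distance_m: float,
                      focal_px: float) -> float:
    """Pinhole-camera approximation of the marker's width in the image."""
    return focal_px * marker_side_m / distance_m

def update_marker_side(distance_m: float, focal_px: float,
                       target_px: float = 200.0,
                       min_side_m: float = 0.02,
                       max_side_m: float = 0.30) -> float:
    """Choose the marker side length to display so that its projection is
    close to `target_px` pixels wide, clipped to what the display allows."""
    desired = target_px * distance_m / focal_px
    return min(max_side_m, max(min_side_m, desired))

# One iteration of the loop: camera estimated at 1.5 m, focal length 800 px.
side = update_marker_side(distance_m=1.5, focal_px=800.0)
print(side)                                 # chosen side length in metres
print(projected_size_px(side, 1.5, 800.0))  # resulting image width in px
```

In a full system this rule would run in closed loop: detect the marker, estimate the camera pose, update the displayed marker, and repeat; the paper's controller additionally adapts the marker's appearance, not only its scale.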


This article is dedicated to Prof. Dr.-Ing. Jürgen Adamy on the occasion of his 60th birthday.




Received: 2021-10-13
Accepted: 2022-01-26
Published online: 2022-03-11
Published in print: 2022-03-28

© 2022 Walter de Gruyter GmbH, Berlin/Boston
