FidMark: A Fiducial Marker Ontology for Semantically Describing Visual Markers

  • Conference paper
  • The Semantic Web (ESWC 2024)

Abstract

Fiducial markers are visual objects that can be placed in the field of view of an imaging sensor to determine its position and orientation, and subsequently the scale and position of other objects within the same field of view. They are used in a wide variety of applications, ranging from medical applications to augmented reality (AR) solutions, where they are applied to determine the location of an AR headset. Despite the wide range of marker types, each with advantages for specific use cases, there exists no standard for deciding which marker is best used in which situation. This leads to proprietary AR solutions that rely on a predefined set of marker and pose detection algorithms, preventing interoperability between AR applications. We propose the FidMark fiducial marker ontology, which classifies and describes the different markers available for computer vision and augmented reality along with their spatial position and orientation. Our proposed ontology also describes the procedures required to perform pose estimation and marker detection, allowing the description of the algorithms used to perform these procedures. With FidMark we aim to enable future AR solutions to semantically describe markers within an environment so that third-party applications can utilise this information.
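
As an illustration of how such a semantic description might look, the snippet below is a minimal Turtle sketch of one detected marker and its pose. The fidmark: namespace, class names (e.g. fidmark:ArUcoMarker, fidmark:Pose) and properties (e.g. fidmark:markerIdentifier, fidmark:hasPose) are hypothetical stand-ins chosen for this example rather than the ontology's actual vocabulary; the online documentation linked in the notes below defines the real terms.

```turtle
@prefix fidmark: <https://openhps.github.io/FidMark/1.0/> .  # hypothetical namespace for this sketch
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:      <http://example.org/lab/> .

# One ArUco marker placed on a wall; class and property names are illustrative assumptions.
ex:marker42 a fidmark:ArUcoMarker ;
    fidmark:markerIdentifier 42 ;                # identifier within its marker dictionary
    fidmark:sideLength "0.10"^^xsd:double ;      # physical side length in metres
    fidmark:hasPose ex:marker42Pose .

# The marker's spatial position and orientation, which the ontology is said to describe.
ex:marker42Pose a fidmark:Pose ;
    fidmark:position "0.50 1.20 0.00" ;          # x y z in metres (illustrative encoding)
    fidmark:orientation "0 0 0 1" .              # unit quaternion x y z w (illustrative encoding)
```

A third-party AR application could then query such descriptions, for example via SPARQL, to discover which markers are present in an environment and feed them to its own detection and pose estimation algorithms, which is the interoperability scenario the abstract targets.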

Notes

  1. For more information about the ontology profile and ontology statistics, please check the online documentation.

  2. https://openhps.github.io/FidMark/1.0/en/.

  3. https://github.com/damianofalcioni/js-aruco2.

  4. https://openhps.github.io/FidMark/application/.

  5. https://github.com/OpenHPS/FidMark/blob/main/examples/virtual_objects.ttl.

Acknowledgements

The research of Isaac Valadez has been funded by a Baekeland mandate of Flanders Innovation & Entrepreneurship (VLAIO, HBC.2020.2881).

Author information

Correspondence to Maxim Van de Wynckel.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Van de Wynckel, M., Valadez, I., Signer, B. (2024). FidMark: A Fiducial Marker Ontology for Semantically Describing Visual Markers. In: Meroño Peñuela, A., et al. The Semantic Web. ESWC 2024. Lecture Notes in Computer Science, vol 14665. Springer, Cham. https://doi.org/10.1007/978-3-031-60635-9_14

  • DOI: https://doi.org/10.1007/978-3-031-60635-9_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-60634-2

  • Online ISBN: 978-3-031-60635-9

  • eBook Packages: Computer Science, Computer Science (R0)
