
A real-time low-cost marker-based multiple camera tracking solution for virtual reality applications

  • Special Issue
  • Published in: Journal of Real-Time Image Processing

Abstract

One reason virtual reality applications penetrate industrial production chains slowly is the significant investment cost of adequate supporting hardware. As a consequence, such applications are available only to major companies and fail to benefit the production processes of small and medium enterprises. In this article, we introduce PTrack, a real-time, low-cost, marker-based multiple-camera tracking solution for virtual reality that provides the accuracy and scalability usually found in far more expensive tracking systems. PTrack is composed of single-camera tracking units, the PTrack Units. Each unit is connected to a video camera equipped with infrared strobes and features a novel iterative geometric pose estimation algorithm that performs marker-based single-camera tracking, making each unit completely autonomous. Multiple PTrack Units successively extend the tracking range of the system. For a smooth hand-off of tracked labels from one camera to another, the camera coverage areas must overlap to form a contiguous tracking space. A PTrack Sensor Fusion Module then computes the pose of a given label within the tracking space and forwards it to interested applications. A universal test setup for optical tracking systems has been built that allows measuring the translational and rotational accuracy of PTrack as well as of competing systems.
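To make the hand-off concrete: when a label is visible to two overlapping cameras, the Sensor Fusion Module must reconcile the units' independent pose estimates into a single pose. The abstract does not specify the fusion rule, so the Python sketch below illustrates one plausible scheme, a confidence-weighted blend of positions; the `UnitReport` class, its fields, and the weighting are illustrative assumptions, not PTrack's published interface.

```python
import numpy as np

class UnitReport:
    """One camera unit's estimate of a tracked label's pose (illustrative).

    The field names and the scalar confidence are assumptions; the paper
    does not publish the Sensor Fusion Module's actual interface.
    """
    def __init__(self, label_id, position, quaternion, confidence):
        self.label_id = label_id
        self.position = np.asarray(position, dtype=float)      # (x, y, z) in the shared tracking space
        self.quaternion = np.asarray(quaternion, dtype=float)  # (w, x, y, z), unit norm
        self.confidence = float(confidence)                    # e.g. higher near the image centre

def fuse_reports(reports):
    """Blend per-camera estimates of one label into a single pose.

    Sketch only: a confidence-weighted mean of the positions, with the
    orientation taken from the most confident unit. PTrack's actual
    fusion rule may differ.
    """
    weights = np.array([r.confidence for r in reports])
    weights /= weights.sum()
    position = sum(w * r.position for w, r in zip(weights, reports))
    best = max(reports, key=lambda r: r.confidence)
    return position, best.quaternion

# A label seen by two overlapping cameras during a hand-off: the weighting
# lets the fused estimate drift smoothly toward the camera that sees it best.
reports = [
    UnitReport("wand", (1.02, 0.50, 2.00), (1.0, 0.0, 0.0, 0.0), confidence=0.9),
    UnitReport("wand", (0.98, 0.52, 2.01), (1.0, 0.0, 0.0, 0.0), confidence=0.4),
]
print(fuse_reports(reports))
```

Weighting the blend by confidence, rather than switching abruptly between cameras, is one way to obtain the smooth transition across overlapping coverage areas that the abstract calls for.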





Acknowledgments

The research leading to these results has received funding from the European Commission's 6th Framework Programme under grant agreement FP6-IST-2-004785, IMPROVE (Improving Display and Rendering Technologies for Virtual Environments).

Author information


Corresponding author

Correspondence to Pedro Carlos Santos.


About this article

Cite this article

Santos, P.C., Stork, A., Buaes, A. et al. A real-time low-cost marker-based multiple camera tracking solution for virtual reality applications. J Real-Time Image Proc 5, 121–128 (2010). https://doi.org/10.1007/s11554-009-0138-9

