Rapidly constructed appearance models for tracking in augmented reality applications

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

This paper presents a method for rapidly generating crude, appearance-based edge models consisting of a set of planes. The appearance of each plane is modeled using a set of keyframes, each containing a list of edgels. The models are generated from a short video sequence with a few annotated frames indicating the location of the object of interest. Collecting the data takes 3–5 min using a handheld device instrumented with a camera. The resulting models are used with an existing edge tracking algorithm, modified to select the appropriate edge keyframe and to detect occlusion. A framestore is also created containing several views of the object, each represented as a set of point features; the framestore provides an initial, rough pose estimate for initializing contour tracking. The presented system is used to create an augmented reality application that guides a user through a machine tool setup and a printer maintenance task. The models are shown to be an accurate representation of the object, and the performance of various aspects of the model-making and tracking algorithms is evaluated.
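As a rough illustration only (not the authors' implementation, whose details are in the paper itself), the model described in the abstract — planes whose appearance is stored as keyframes of edgels, with a keyframe chosen by proximity to the current viewpoint — might be organized as follows. The pose representation and the Euclidean selection criterion are assumptions for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Edgel:
    # one edge pixel: image position plus gradient orientation
    x: float
    y: float
    angle: float  # radians

@dataclass
class Keyframe:
    # one appearance sample of a plane: the viewpoint it was captured
    # from, and the edgels visible from that viewpoint
    camera_pose: tuple  # hypothetical 6-DoF pose (x, y, z, roll, pitch, yaw)
    edgels: list = field(default_factory=list)

@dataclass
class PlaneModel:
    # one plane of the crude model; its appearance is a set of keyframes
    normal: tuple
    keyframes: list = field(default_factory=list)

    def nearest_keyframe(self, pose):
        # pick the keyframe recorded from the viewpoint closest to the
        # current pose estimate (squared Euclidean distance is used here
        # as a stand-in for the paper's selection criterion)
        return min(self.keyframes,
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(k.camera_pose, pose)))
```

A framestore for pose initialization would, by analogy, hold several whole-object views as sets of point features rather than edgels, matched against the live image to seed the contour tracker.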



Author information

Corresponding author

Correspondence to Jeremiah Neubert.

Additional information

Thanks to ABB for their generous support of this work.

About this article

Cite this article

Neubert, J., Pretlove, J. & Drummond, T. Rapidly constructed appearance models for tracking in augmented reality applications. Machine Vision and Applications 23, 843–856 (2012). https://doi.org/10.1007/s00138-011-0382-4
