
Experiments in active vision with real and virtual robot heads

Published in: Applied Intelligence

Abstract

In the emerging paradigm of animate vision, visual processes are not treated as independent of cognitive or motor processing, but as part of an integrated system operating within the context of visual behavior. Intimate coupling of sensory and motor systems has been found to significantly improve the performance of behavior-based vision systems. Studying active vision therefore requires a sensory-motor system, and designing, building, and operating such a test bed is a challenging task. In this paper we describe the status of ongoing work in developing a sensory-motor robotic system, R2H, with ten degrees of freedom (DOF) for research in active vision. To complement the R2H system, a Graphical Simulation and Animation (GSA) environment has also been developed. The objective of building the GSA system is to create a comprehensive design tool for designing and studying the behavior of active systems and their interactions with the environment. The GSA system helps researchers develop high-performance, reliable software and hardware in the most effective manner. The GSA environment integrates sensing and motor actions and features complete kinematic simulation of the R2H system, its sensors, and its workspace. With the aid of the GSA environment, Depth from Focus (DFF), Depth from Vergence, and Depth from Stereo modules are implemented and tested. The power and usefulness of the GSA system as a research tool is demonstrated by acquiring and analyzing images in the real and virtual worlds using the same software implemented and tested in the virtual world.
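The Depth from Stereo and Depth from Vergence modules mentioned above both reduce, at their core, to triangulation across the binocular baseline. The paper does not give its formulas here, so the following is a minimal illustrative sketch, not the authors' implementation: it assumes a pinhole camera model, a known baseline, and (for vergence) two cameras whose optical axes are rotated inward to fixate a common point. All function names and parameters are hypothetical.

```python
import math

def depth_from_stereo(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole-stereo triangulation: Z = f * b / d.

    f_px        -- focal length expressed in pixels
    baseline_m  -- separation between the two camera centers, in meters
    disparity_px -- horizontal pixel disparity of a matched feature
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / disparity_px

def depth_from_vergence(baseline_m: float,
                        left_angle_rad: float,
                        right_angle_rad: float) -> float:
    """Depth of the fixation point from the two vergence angles.

    With cameras at x = -b/2 and x = +b/2 and each optical axis rotated
    inward by the given angle from straight ahead, the fixated point lies
    at depth Z = b / (tan(theta_left) + tan(theta_right)).
    """
    return baseline_m / (math.tan(left_angle_rad) + math.tan(right_angle_rad))

# Both cues agree on the same scene geometry: a point 2 m ahead of a
# 10 cm baseline rig with a 700 px focal length yields a 35 px disparity,
# and symmetric vergence angles of atan(0.025) each.
z_stereo = depth_from_stereo(f_px=700.0, baseline_m=0.1, disparity_px=35.0)
z_verge = depth_from_vergence(0.1, math.atan(0.025), math.atan(0.025))
print(z_stereo, z_verge)  # both 2.0 m
```

In a simulation environment like the GSA described here, the appeal of such closed-form cues is that ground-truth depth is available from the virtual world, so each module can be validated in simulation before the identical code is run against images from the physical head.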




Additional information

This research was supported by the U.S. Department of Energy under the DOE's University Program in Robotics for Advanced Reactors (Universities of Florida, Michigan, Tennessee, Texas, and the Oak Ridge National Laboratory) under Contract No. DOE DE-FG02-86NE37968.


About this article

Cite this article

Marapane, S.B., Trivedi, M.M. Experiments in active vision with real and virtual robot heads. Appl Intell 5, 237–250 (1995). https://doi.org/10.1007/BF00872224
