A discrete event framework for autonomous observation under uncertainty

Journal of Intelligent and Robotic Systems

Abstract

In this work we establish a framework for the general problem of observation, which may be applied to different kinds of visual tasks. We construct ‘intelligent’ high-level control mechanisms for active visual recognition of different processes within a hybrid dynamic system. We address the problem of observing a manipulation process in order to illustrate the ideas and motivation behind our framework. We use a discrete event dynamic system as a high-level structuring technique to model the manipulation system. The formulation utilizes knowledge about the system and its different actions in order to solve the observer problem in an efficient, stable and practical manner. The model uses different tracking mechanisms so that the observer can ‘see’ the workspace of the manipulating robot. An automaton is developed for the hand/object interaction over time and a stabilizing observer is constructed. Low-level modules are developed for recognizing, in real time, the visual ‘events’ that cause state transitions within the dynamic manipulation system. A coarse quantization of the manipulation actions is used in order to attain an active, adaptive and goal-directed sensing mechanism. The formulation provides high-level symbolic interpretations of the scene under observation. The discrete event framework is augmented with mechanisms for recovering the continuous parametric evolution of the scene under observation and for asserting the state of the manipulation agent. This work examines closely the possibilities for errors, mistakes and uncertainties in the manipulation system, the observer construction process and the event identification mechanisms. We identify and suggest techniques for modeling these uncertainties. Ambiguities are allowed to develop and are resolved after finite time. Error recovery mechanisms are also devised. The computed uncertainties are utilized for navigating the observer automaton state space, asserting state transitions and developing a suitable tracking mechanism.
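To make the observer structure concrete, the sketch below gives one plausible reading of the discrete event observer described in the abstract: low-level visual ‘events’ drive transitions of a coarsely quantized hand/object interaction automaton, and a per-state uncertainty (belief) is maintained so that ambiguities can develop and be resolved after a finite number of observations. This is a minimal illustration only; the state names, events, confidence values and the specific belief-update rule are assumptions for exposition, not the authors' actual model.

```python
# Minimal sketch of a discrete event observer for a manipulation process.
# Visual events reported by low-level modules drive state transitions of a
# hand/object interaction automaton; event confidences are folded into a
# belief over states, so the observer asserts a state only once ambiguity
# has been resolved. All names and numbers below are illustrative.

from dataclasses import dataclass, field

@dataclass
class DiscreteEventObserver:
    # transitions[state][event] -> next state
    transitions: dict
    # belief over automaton states (uncertainty in the current assertion)
    belief: dict = field(default_factory=dict)

    def update(self, event: str, confidence: float) -> None:
        """Propagate the belief through one observed visual event.

        `confidence` is the low-level module's certainty that the event
        actually occurred; the remaining probability mass stays with the
        previous state, which lets ambiguities persist until later events
        resolve them.
        """
        new_belief = {}
        for state, p in self.belief.items():
            nxt = self.transitions.get(state, {}).get(event)
            if nxt is not None:
                new_belief[nxt] = new_belief.get(nxt, 0.0) + p * confidence
                new_belief[state] = new_belief.get(state, 0.0) + p * (1 - confidence)
            else:
                # The event is impossible in this state: treat the report as
                # evidence against the state.
                new_belief[state] = new_belief.get(state, 0.0) + p * (1 - confidence)
        total = sum(new_belief.values()) or 1.0
        self.belief = {s: p / total for s, p in new_belief.items()}

    def asserted_state(self, threshold: float = 0.8):
        """Return the state the observer asserts, or None while ambiguous."""
        state, p = max(self.belief.items(), key=lambda kv: kv[1])
        return state if p >= threshold else None


# Illustrative hand/object interaction automaton (coarse quantization).
observer = DiscreteEventObserver(
    transitions={
        "approach":   {"contact": "grasp"},
        "grasp":      {"lift": "manipulate", "release": "approach"},
        "manipulate": {"release": "done"},
    },
    belief={"approach": 1.0},
)
observer.update("contact", confidence=0.7)   # ambiguous: approach vs. grasp
observer.update("lift", confidence=0.9)      # resolves toward manipulate
print(observer.asserted_state())             # -> "manipulate"
```

In the same spirit, the asserted state could then be used to select the tracking mechanism appropriate to that phase of the manipulation, which is the role the abstract assigns to the computed uncertainties.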





Cite this article

Sobh, T.M., Bajcsy, R. A discrete event framework for autonomous observation under uncertainty. J Intell Robot Syst 16, 315–385 (1996). https://doi.org/10.1007/BF00270449

