
A variational approach to multi-sensor fusion of images

Published in Applied Intelligence.

Abstract

Past research into multi-modality sensor data fusion has generally produced approaches that are heuristic and ad hoc. In this paper we use the calculus of variations as the underlying framework for fusing registered images of different modalities when models relating these modalities are available. The result is a mathematically rigorous method for improving the accuracy with which parameters can be estimated. Using both dense and sparse simulated range and intensity data, the proposed approach is demonstrated on the problem of estimating the surface representing the three-dimensional structure of a scene. The results indicate that a four- to five-fold increase in surface estimation accuracy with respect to the original input data can be realized. Furthermore, an 8%–250% increase in accuracy over surface estimation from each sensing modality alone (i.e., via shape from shading or surface reconstruction) can be realized.
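The flavor of such a variational fusion can be illustrated with a minimal 1-D numerical sketch. This is not the paper's algorithm: the weights, the finite-difference operators, and the use of a noisy slope measurement as a stand-in for the shape-from-shading constraint are all assumptions made for illustration. A quadratic energy combines a range-data fidelity term, a slope-data term, and a smoothness penalty, and its minimizer is obtained by solving the resulting normal equations.

```python
import numpy as np

# Illustrative 1-D fusion sketch (weights alpha/beta/gamma and the discrete
# setup are assumptions, not the paper's method). We minimize
#   E(z) = alpha*||z - z_range||^2 + beta*||D z - p||^2 + gamma*||D2 z||^2
# where z_range is noisy range data and p is a noisy slope measurement
# standing in for the intensity/shading constraint.
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]
z_true = np.sin(x)

z_range = z_true + rng.normal(0.0, 0.15, n)          # noisy "range sensor"
xm = 0.5 * (x[:-1] + x[1:])                          # slope sample points
p_meas = np.cos(xm) + rng.normal(0.0, 0.15, n - 1)   # noisy "slope sensor"

# First- and second-difference operators as dense matrices.
D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h
D2 = (np.eye(n - 2, n, 2) - 2.0 * np.eye(n - 2, n, 1) + np.eye(n - 2, n)) / h**2

# E(z) is quadratic, so the minimizer solves the normal equations A z = b.
alpha, beta, gamma = 1.0, 1.0, 1e-3
A = alpha * np.eye(n) + beta * D.T @ D + gamma * D2.T @ D2
b = alpha * z_range + beta * D.T @ p_meas
z_fused = np.linalg.solve(A, b)

rmse = lambda e: float(np.sqrt(np.mean(e**2)))
print("range-only RMSE:", rmse(z_range - z_true))
print("fused RMSE:     ", rmse(z_fused - z_true))
```

In this synthetic run the fused estimate is substantially more accurate than the raw range data, mirroring the kind of improvement the abstract reports when independent modalities are combined under one energy functional.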


References

  1. J.J. Clark and A.L. Yuille, Data Fusion for Sensory Information Processing Systems, Kluwer, Boston, MA, 1990.

  2. N. Nandhakumar and J. Aggarwal, “Multisensory computer vision,” in Advances in Computers, Vol. 34, M. Yovits (ed.), Academic Press, San Diego, CA, 1992.

  3. A. Blake, A. Zisserman, and G. Knowles, “Surface descriptions from stereo and shading,” Image and Vision Computing, 3(4), pp. 183–191, 1985.

  4. M.J. Magee, B.A. Boyter, C.H. Chien, and J.K. Aggarwal, “Experiments in intensity guided range sensing recognition of three-dimensional objects,” IEEE Trans. Pattern Analysis and Machine Intelligence, 7(6), pp. 629–637, 1985.

  5. H. Pien, “Achieving safe autonomous landings on Mars using vision-based approaches,” in Proceedings of the SPIE Conference on Cooperative Intelligent Robotics in Space, 1991.

  6. K. Ikeuchi and K. Sato, “Determining reflective properties of an object using range and brightness images,” IEEE Trans. Pattern Analysis and Machine Intelligence, 13(11), pp. 1139–1153, Nov. 1991.

  7. D. Nitzan, A. Brain, and R. Duda, “The measurement and use of registered reflectance and range data in scene analysis,” Proc. of the IEEE, 65(2), February 1977.

  8. R. Duda, D. Nitzan, and P. Barrett, “Use of range and reflectance data to find planar surface regions,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-1(3), July 1979.

  9. B. Gil, A. Mitiche, and J.K. Aggarwal, “Experiments in combining intensity and range edge maps,” CVGIP, 21(3), pp. 395–441, March 1983.

  10. I.S. Kweon, M. Hebert, and T. Kanade, “Sensor fusion of range and reflectance data for outdoor scene analysis,” in 2nd Annual Workshop on Space Operations, Automation, and Robotics, NASA, Washington, DC, 1988.

  11. C.J. Delcroix and M.A. Abidi, “Fusion of range and intensity edge maps,” in SPIE Proc. on Sensor Fusion: Spatial Reasoning and Scene Interpretation, Vol. 1003, SPIE, pp. 145–152, 1988.

  12. T.A. Mancini and L.B. Wolff, “3D shape and source location from depth and reflectance,” in SPIE Proc. Optics, Illumination, and Image Sensing for Machine Vision VI, Vol. 1614, 1991.

  13. Y.F. Wang and D.I. Cheng, “Three-dimensional shape construction and recognition by fusing intensity and structured lighting,” Pattern Recognition, 25(12), pp. 1411–1425, 1992.

  14. J.E. Cryer, P.S. Tsai, and M. Shah, “Combining shape from shading and stereo using human vision model,” Technical Report CS-TR-92-25, Department of Computer Science, University of Central Florida, Orlando, FL, 1992.

  15. E.B. Gamble, D. Geiger, T. Poggio, and D. Weinshall, “Integration of vision modules and labeling of surface discontinuities,” IEEE Trans. Systems, Man, and Cybernetics, 19(6), pp. 1156–1161, November/December 1989.

  16. T. Poggio, J. Little, W. Gillett, D. Geiger, D. Weinshall, M. Villalba, N. Larson, T. Cass, H. Bülthoff, M. Drumheller, P. Oppenheimer, W. Yang, and A. Hurlbert, “The MIT Vision Machine,” in Proc. DARPA Image Understanding Workshop, Morgan Kaufmann, San Mateo, CA, April 1988.

  17. A. Blake, “Comparison of the efficiency of deterministic and stochastic algorithms for visual reconstruction,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 11(1), pp. 2–12, 1989.

  18. A.A. Amini, T.E. Weymouth, and R.C. Jain, “Using dynamic programming for solving variational problems in vision,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 12(9), pp. 855–867, 1990.

  19. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” in Proc. First Int'l Conf. on Computer Vision, London, pp. 259–268, 1987.

  20. L.D. Cohen, “On active contour models and balloons,” CVGIP: Image Understanding, Vol. 53, pp. 211–218, March 1991.

  21. J.M. Gauch and M. Seaidoun, “Comparison of implementation strategies for deformable surfaces in computer vision,” in Proc. SPIE Conf. Mathematical Methods in Medical Imaging, Vol. 1768, San Diego, CA, July 1992.

  22. F. Leymarie and M.D. Levine, “Tracking deformable objects in the plane using an active contour model,” IEEE Trans. Pattern Analysis and Machine Intelligence, 15(6), pp. 617–634, June 1993.

  23. W.E.L. Grimson, From Images to Surfaces, MIT Press, Cambridge, MA, 1981.

  24. D. Terzopoulos, “Regularization of inverse visual problems involving discontinuities,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 8(4), July 1986.

  25. R.M. Bolle and B.C. Vemuri, “On three-dimensional surface reconstruction methods,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(1), pp. 2–13, 1991.

  26. A. Blake and A. Zisserman, Visual Reconstruction, MIT Press, Cambridge, MA, 1987.

  27. S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6(6), Nov. 1984.

  28. D. Mumford and J. Shah, “Boundary detection by minimizing functionals, I,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 1985.

  29. B.K.P. Horn and M.J. Brooks (eds.), Shape from Shading, MIT Press, Cambridge, MA, 1989.

  30. N.E. Hurt, “Mathematical methods in shape-from-shading: a review of recent results,” Acta Applicandae Mathematicae, 23, pp. 163–188, 1991.

  31. B.K.P. Horn, “Shape from shading: a method for obtaining the shape of a smooth opaque object from one view,” Ph.D. Thesis, Dept. of Electrical Engineering, MIT-AI-TR-232, 1970.

  32. B.K.P. Horn, “Obtaining shape from shading information,” in The Psychology of Computer Vision, P.H. Winston (ed.), McGraw-Hill, NY, pp. 115–155, 1975.

  33. K. Ikeuchi and B.K.P. Horn, “Numerical shape from shading and occluding boundaries,” Artificial Intelligence, 17(1–3), pp. 141–184, 1981.

  34. T. Strat, “A numerical method for shape from shading for a single image,” S.M. Thesis, MIT EECS, MIT, Cambridge, MA, 1979.

  35. B.K.P. Horn and M.J. Brooks, “The variational approach to shape from shading,” Computer Vision, Graphics, and Image Processing, 33(2), pp. 174–208, 1986.

  36. C.H. Lee and A. Rosenfeld, “Improved methods of estimating shape from shading using the light source coordinate system,” in B.K.P. Horn and M.J. Brooks (eds.), Shape from Shading, MIT Press, Cambridge, MA, 1989; revised version of Artificial Intelligence, 26(2), 1985.

  37. B.K.P. Horn, “Height and gradient from shading,” Int'l Journal of Computer Vision, 5(1), pp. 37–75, 1990.

  38. A.P. Pentland, “Local shading analysis,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 6(2), pp. 170–187, 1984. A revised version also appears in Horn and Brooks [1989].

  39. A.P. Pentland, “Shape information from shading: a theory about human perception,” in Proc. 2nd Int'l Conf. on Computer Vision, IEEE, Los Alamitos, CA, 1988.

  40. M. Bichsel and A.P. Pentland, “A simple algorithm for shape from shading,” in Proc. Computer Vision and Pattern Recognition 1992 Conference, IEEE, Los Alamitos, CA, 1992.

  41. P. Dupuis and J. Oliensis, “Direct method for reconstructing shape from shading,” in Proc. Computer Vision and Pattern Recognition 1992 Conference, IEEE, Los Alamitos, CA, 1992.

  42. B.V.H. Saxberg, “A modern differential geometric approach to shape from shading,” Ph.D. Thesis, MIT EECS, 1989.

  43. T. Poggio, V. Torre, and C. Koch, “Computational vision and regularization theory,” Nature, 317, 26 September 1985.

  44. M. Bertero, T. Poggio, and V. Torre, “Ill-posed problems in early vision,” Proceedings of the IEEE, 76(8), 1988.

  45. J. Oliensis, “Uniqueness in shape from shading,” Int'l J. of Computer Vision, 6(2), pp. 75–104, 1991a.

  46. R.T. Frankot and R. Chellappa, “A method for enforcing integrability in shape from shading algorithms,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 10(4), pp. 439–451, 1988.

  47. D. Forsyth and A. Zisserman, “Reflections on shading,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(7), pp. 671–679, 1991.

  48. Q. Zheng and R. Chellappa, “Estimation of illuminant direction, albedo, and shape from shading,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(7), pp. 680–702, 1991.

  49. A.P. Pentland, “Finding the illuminant direction,” J. Optical Society of America, 72(4), pp. 448–455, 1982.

  50. M.J. Brooks and B.K.P. Horn, “Shape and source from shading,” in Proc. 1985 Int'l Joint Conf. Artificial Intelligence, Morgan Kaufmann, San Mateo, CA, pp. 932–936, 1985.

  51. T. Simchony, R. Chellappa, and M. Shao, “Direct analytical methods for solving Poisson equations in computer vision problems,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 12(5), pp. 435–446, 1990.

  52. E. Rouy and A. Tourin, “A viscosity solutions approach to shape-from-shading,” SIAM J. Numerical Analysis, 29(3), pp. 867–884, 1992.

  53. M.A. Penna, “A shape from shading analysis for a single perspective image of a polyhedron,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 11(6), pp. 545–554, 1989.

  54. J. Malik and D. Maydan, “Recovering three-dimensional shape from a single image of curved objects,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 11(6), pp. 555–566, 1989.

  55. R. Courant and D. Hilbert, Methods of Mathematical Physics, Interscience, NY, 1953.

  56. D. Weinshall, “The shape of shading,” MIT Artificial Intelligence Laboratory Memo No. 1264, Cambridge, MA, October 1990.

  57. J. Oliensis, “Shape from shading as a partially well-constrained problem,” CVGIP: Image Processing, 54(2), pp. 163–183, 1991b.

  58. B.K.P. Horn, Robot Vision, MIT Press, Cambridge, MA, 1986.

  59. A.P. Pentland, “Linear shape from shading,” Int. J. of Computer Vision, 4(2), pp. 153–162, 1990.

  60. G. Wahba, “Practical approximate solutions to linear operator equations when the data are noisy,” SIAM J. Numerical Analysis, 14(4), pp. 651–667, September 1978.

  61. G. Wahba, “Ill-posed problems: numerical and statistical methods for mildly, moderately, and severely ill-posed problems with noisy data,” Technical Report No. 595, University of Wisconsin Department of Statistics, Madison, WI, 1980.

  62. G. Wahba, Spline Models for Observational Data, SIAM, Philadelphia, PA, 1990.

  63. B. Shahraray and D.J. Anderson, “Optimal estimation of contour properties by cross-validated regularization,” IEEE Trans. Pattern Analysis and Machine Intelligence, 11(6), pp. 600–610, June 1989.

  64. S.J. Reeves and R.M. Mersereau, “Automatic assessment of constraint sets in image restoration,” IEEE Trans. Image Processing, 1(1), pp. 119–123, January 1992.

  65. A. Rosenfeld and M. Thurston, “Edge and curve detection for visual scene analysis,” IEEE Trans. Computers, Vol. C-20, 1971.

  66. A.M. Thompson, J.C. Brown, J.W. Kay, and D.M. Titterington, “A study of methods of choosing the smoothing parameter in image restoration by regularization,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(4), pp. 326–339, April 1991.

  67. P. Hall and I. Koch, “On the feasibility of cross-validation in image analysis,” SIAM J. Applied Math, 52(1), pp. 292–313, February 1992.

  68. H. Pien, A Variational Framework for Multi-Sensor Data Fusion, Ph.D. Dissertation, Northeastern University College of Computer Science, September 1993.

  69. L. Ambrosio and V.M. Tortorelli, “Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence,” Comm. Pure and Applied Mathematics, Vol. 43, pp. 999–1036, 1990.

  70. L. Uhr, “Layered ‘recognition cone’ networks that preprocess,” IEEE Trans. Computers, Vol. C-21, 1972.

  71. S. Tanimoto and T. Pavlidis, “A hierarchical data structure for picture processing,” Computer Graphics and Image Processing, Vol. 4, 1975.

  72. D. Terzopoulos, “Image analysis using multigrid relaxation methods,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 8(2), March 1986.

  73. M.R. Luettgen, W.C. Karl, A.S. Willsky, and R.R. Tenney, “Multiscale representations of Markov random fields,” MIT Laboratory for Information and Decision Systems Report No. LIDS-P-2157, December 1992.

  74. M.R. Luettgen, W.C. Karl, and A.S. Willsky, “Efficient multiscale regularization with applications to the computation of optical flow,” Center for Intelligent Control Systems Technical Report No. CICS-P-370, April 1993.

  75. H. Pien and J. Gauch, “A variational approach to sensor fusion using registered range and intensity data,” in SPIE Conf. on Sensor Fusion and Aerospace Applications, Vol. 1956, April 1993.



Additional information

H. Pien is supported by Draper Laboratory under IR&D No. 451; J. Gauch is partially supported by the National Science Foundation under Grant IRI-9109431.


Cite this article

Pien, H.H., Gauch, J.M. A variational approach to multi-sensor fusion of images. Appl Intell 5, 217–235 (1995). https://doi.org/10.1007/BF00872223
