
Detection and localization of specular surfaces using image motion cues

  • Original Paper, published in Machine Vision and Applications

Abstract

Successful identification of specularities in an image can be crucial for an artificial vision system when extracting the semantic content of an image or while interacting with the environment. We developed an algorithm that relies on scale- and rotation-invariant feature extraction techniques and uses motion cues to detect and localize specular surfaces. The change in feature vectors across frames is used to quantify the appearance distortion on specular surfaces, which has previously been shown to be a powerful indicator of specularity (Doerschner et al. in Curr Biol, 2011). The algorithm combines epipolar deviations (Swaminathan et al. in Lect Notes Comput Sci 2350:508–523, 2002) with appearance distortion, succeeds in localizing specular objects in computer-rendered and real scenes across a wide range of camera motions and speeds as well as object sizes and shapes, and performs well under image noise and blur.
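As an illustration of the appearance-distortion cue, the sketch below (plain NumPy with synthetic 128-dimensional SIFT-like descriptors; the descriptor values and noise levels are invented for demonstration, not taken from the paper) scores each matched feature by the Euclidean distance between its descriptors in two frames. Features on specular surfaces are expected to change appearance far more between frames than features on textured matte surfaces:

```python
import numpy as np

def appearance_distortion(desc_a, desc_b):
    """Per-match appearance change: Euclidean distance between matched
    feature descriptors from two frames (one row per matched feature)."""
    return np.linalg.norm(desc_a - desc_b, axis=1)

rng = np.random.default_rng(0)
frame_t = rng.random((50, 128))                                  # descriptors at time t
matte_next = frame_t + 0.01 * rng.standard_normal((50, 128))     # stable appearance
shiny_next = frame_t + 0.50 * rng.standard_normal((50, 128))     # distorted appearance

d_matte = appearance_distortion(frame_t, matte_next)
d_shiny = appearance_distortion(frame_t, shiny_next)
print(d_matte.mean() < d_shiny.mean())  # specular features change more → True
```

In the full algorithm this per-feature score is combined with epipolar deviations and smoothed into a dense specular field.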


Notes

  1. But see [19], who use only minimal assumptions about the scene, motion, and 3D shape.

  2. Also see [28] on specular shape perception in human observers.

  3. When referring to matte or diffusely reflecting objects, we imply that these objects also have a 2D texture.

  4. It may be that bi-modality does not work well as a global parameter; however, at a particular spatial scale it may continue to correctly predict surface reflectance.

  5. At the fundamental matrix estimation stage, motion vectors from the entire image contribute.

  6. Note that the aim in [1] was to predict human perception. Thus this measure predicts apparent or perceived shininess not physical reflectance.

  7. SIFT features have also been used for sparse specular surface reconstruction [65].

  8. This is crucial since appearance distortion critically depends on the change in feature vectors.

  9. As discussed below: nonrigid and specular motion share similar features and may be confused by a classifier; see, for example [1].

  10. Precision-recall curves are obtained by varying a specific threshold parameter, analogous to ROC curves.

  11. Given the results in experiment 5.1, we did not expect differences in performance for rotation and zoom.

  12. Interestingly, it has been shown that such objects tend to be perceived as less shiny by human observers [71].

  13. We suggest below that, by complementing our motion-based features with static cues to specularity, e.g., [19], simple 3D specular shapes may also be detected.

  14. In [1] images to be classified as matte or shiny contained only a single object and a black background.

  15. Compared to the video sequences in [1].

  16. Specular highlights have been suggested as robust features for matching between 2D images and an object’s 3D representation for pose estimation [73]. This suggests that highlights may also be useful for specular object detection.

References

  1. Doerschner, K., Fleming, R., Yilmaz, O., Schrater, P., Hartung, B., Kersten, D.: Visual motion and the perception of surface material. Curr. Biol. 21(23), 2010–2016 (2011)

  2. Swaminathan, R., Kang, S., Szeliski, R., Criminisi, A., Nayar, S.: On the motion and appearance of specularities in image sequences. Lect. Notes Comput. Sci. 2350, 508–523 (2002)

  3. Horn, B.: Shape from shading: a method for obtaining the shape of a smooth opaque object from one view (1970)

  4. Horn, B.: Shape from Shading Information. McGraw-Hill, New York (1975)

  5. Koenderink, J., Van Doorn, A.: Photometric invariants related to solid shape. Optica Acta 27, 981–996 (1980)

  6. Pentland, A.: Shape information from shading: a theory about human perception. Spatial Vis. 4, 165–182 (1989)

  7. Ihrke, I., Kutulakos, K., Magnor, M., Heidrich, W.: State of the art in transparent and specular object reconstruction. In: EUROGRAPHICS 2008 STAR—State of the Art Report (2008)

  8. Wang, Z., Huang, X., Yang, R., Zhang, Y.: Measurement of mirror surfaces using specular reflection and analytical computation. Mach. Vis. Appl. 24, 289–304 (2013)

  9. Saint-Pierre, C.-A., Boisvert, J., Grimard, G., Cheriet, F.: Detection and correction of specular reflections for automatic surgical tool segmentation in thoracoscopic images. Mach. Vis. Appl. 22, 171–180 (2011)

  10. Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A., Fitzgibbon, A.: KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST ’11, ACM, New York, pp. 559–568 (2011)

  11. Newcombe, R.A., Davison, A.J., Izadi, S., Kohli, P., Hilliges, O., Shotton, J., Molyneaux, D., Hodges, S., Kim, D., Fitzgibbon, A.: KinectFusion: real-time dense surface mapping and tracking. In: 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 127–136 (2011)

  12. Dörschner, K.: Image Motion and the Appearance of Objects. McGraw-Hill, New York (1975)

  13. Shafer, S.: Using color to separate reflection components. Color Res. Appl. 10, 210–218 (1985)

  14. Wolff, L., Boult, T.: Constraining object features using a polarization reflectance model. IEEE Trans. Pattern Anal. Mach. Intell. 13, 635–657 (1991)

  15. Nayar, S., Fang, X., Boult, T.: Removal of specularities using color and polarization. In: 1993 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings CVPR ’93, pp. 583–590 (1993)

  16. Oren, M., Nayar, S.: A theory of specular surface geometry. Int. J. Comput. Vis. 24, 105–124 (1997)

  17. Roth, S., Black, M.: Specular flow and the recovery of surface structure. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, IEEE, pp. 1869–1876 (2006)

  18. Nayar, S., Ikeuchi, K., Kanade, T.: Determining shape and reflectance of Lambertian, specular, and hybrid surfaces using extended sources. In: International Workshop on Industrial Applications of Machine Intelligence and Vision, IEEE, pp. 169–175

  19. DelPozo, A., Savarese, S.: Detecting specular surfaces on natural images. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, CVPR ’07, pp. 1–8 (2007)

  20. Doerschner, K., Kersten, D., Schrater, P.: Rapid classification of specular and diffuse reflection from image velocities. Pattern Recognit. 44, 1874–1884 (2011)

  21. Ho, Y., Landy, M., Maloney, L.: How direction of illumination affects visually perceived surface roughness. J. Vis. 6, 8 (2006)

  22. Doerschner, K., Boyaci, H., Maloney, L.: Estimating the glossiness transfer function induced by illumination change and testing its transitivity. J. Vis. 10(4), 1–9 (2010)

  23. Doerschner, K., Maloney, L., Boyaci, H.: Perceived glossiness in high dynamic range scenes. J. Vis. 10 (2010)

  24. te Pas, S., Pont, S.: A comparison of material and illumination discrimination performance for real rough, real smooth and computer generated smooth spheres. In: Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, ACM, New York, pp. 75–81 (2005)

  25. Nishida, S., Shinya, M.: Use of image-based information in judgments of surface-reflectance properties. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 15, 2951–2965 (1998)

  26. Dror, R., Adelson, E., Willsky, A.: Estimating surface reflectance properties from images under unknown illumination. In: Human Vision and Electronic Imaging VI, SPIE Photonics West, pp. 231–242 (2001)

  27. Matusik, W., Pfister, H., Brand, M., McMillan, L.: A data-driven reflectance model. ACM Trans. Graph. 22, 759–769 (2003)

  28. Fleming, R., Torralba, A., Adelson, E.: Specular reflections and the perception of shape. J. Vis. 4, 798–820 (2004)

  29. Motoyoshi, I., Nishida, S., Sharan, L., Adelson, E.: Image statistics and the perception of surface qualities. Nature 447, 206–209 (2007)

  30. Vangorp, P., Laurijssen, J., Dutré, P.: The influence of shape on the perception of material reflectance. ACM Trans. Graph. 26(3), 77 (2007)

  31. Olkkonen, M., Brainard, D.: Perceived glossiness and lightness under real-world illumination. J. Vis. 10, 5 (2010)

  32. Kim, J., Anderson, B.: Image statistics and the perception of surface gloss and lightness. J. Vis. 10(9), 3 (2010)

  33. Marlow, P., Kim, J., Anderson, B.: The role of brightness and orientation congruence in the perception of surface gloss. J. Vis. 11, 16 (2011)

  34. Kim, J., Marlow, P., Anderson, B.: The perception of gloss depends on highlight congruence with surface shading. J. Vis. 11, 4 (2011)

  35. Zaidi, Q.: Visual inferences of material changes: color as clue and distraction. Wiley Interdiscip. Rev. Cogn. Sci. 2(6), 686–700 (2011)

  36. te Pas, S., Pont, S., van der Kooij, K.: Both the complexity of illumination and the presence of surrounding objects influence the perception of gloss. J. Vis. 10, 450 (2010)

  37. Hartung, B., Kersten, D.: Distinguishing shiny from matte. J. Vis. 2, 551 (2002)

  38. Sakano, Y., Ando, H.: Effects of self-motion on gloss perception. Perception 37, 77 (2008)

  39. Wendt, G., Faul, F., Ekroll, V., Mausfeld, R.: Disparity, motion, and color information improve gloss constancy performance. J. Vis. 10, 7 (2010)

  40. Blake, A.: Specular stereo. In: Proceedings of the International Joint Conference on Artificial Intelligence, pp. 973–976 (1985)

  41. Weyrich, T., Lawrence, J., Lensch, H., Rusinkiewicz, S., Zickler, T.: Principles of appearance acquisition and representation. Found. Trends Comput. Graph. Vis. 4, 75–191 (2009)

  42. Klinker, G., Shafer, S., Kanade, T.: A physical approach to color image understanding. Int. J. Comput. Vis. 4, 7–38 (1990)

  43. Bajcsy, R., Lee, S., Leonardis, A.: Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation. Int. J. Comput. Vis. 17, 241–272 (1996)

  44. Tan, R., Ikeuchi, K.: Separating reflection components of textured surfaces using a single image. IEEE Trans. Pattern Anal. Mach. Intell. 27(2), 178–193 (2005)

  45. Mallick, S.P., Zickler, T., Belhumeur, P.N., Kriegman, D.J.: Specularity removal in images and videos: a PDE approach. In: Computer Vision-ECCV 2006, pp. 550–563. Springer, Berlin (2006)

  46. Angelopoulou, E.: Specular highlight detection based on the Fresnel reflection coefficient. In: IEEE 11th International Conference on Computer Vision, ICCV 2007, pp. 1–8 (2007)

  47. Chung, Y., Chang, S., Cherng, S., Chen, S.: Dichromatic reflection separation from a single image. Lect. Notes Comput. Sci. 4679, 225 (2007)

  48. Adato, Y., Ben-Shahar, O.: Specular flow and shape in one shot. In: BMVC, pp. 1–11 (2011)

  49. Szeliski, R.: Computer Vision: Algorithms and Applications. Springer, New York (2010)

  50. Adato, Y., Vasilyev, Y., Ben-Shahar, O., Zickler, T.: Toward a theory of shape from specular flow. In: ICCV 2007, pp. 1–8 (2007)

  51. Adato, Y., Zickler, T., Ben-Shahar, O.: Toward robust estimation of specular flow. In: Proceedings of the British Machine Vision Conference, p. 1 (2010)

  52. Vasilyev, Y., Adato, Y., Zickler, T., Ben-Shahar, O.: Dense specular shape from multiple specular flows. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pp. 1–8 (2008)

  53. Oo, T., Kawasaki, H., Ohsawa, Y., Ikeuchi, K.: The separation of reflected and transparent layers from real-world image sequence. Mach. Vis. Appl. 18, 17–24 (2007)

  54. Adato, Y., Vasilyev, Y., Zickler, T., Ben-Shahar, O.: Shape from specular flow. IEEE Trans. Pattern Anal. Mach. Intell. 32, 2054–2070 (2010)

  55. Blake, A., Bulthoff, H.: Shape from specularities: computation and psychophysics. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 331, 237–252 (1991)

  56. Lowe, D.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004)

  57. Horn, B.K., Schunck, B.G.: Determining optical flow. Artif. Intell. 17, 185–203 (1981)

  58. Adato, Y., Zickler, T., Ben-Shahar, O.: A polar representation of motion and implications for optical flow. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1145–1152 (2011)

  59. Cordes, K., Muller, O., Rosenhahn, B., Ostermann, J.: HALF-SIFT: high-accurate localized features for SIFT. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), pp. 31–38 (2009)

  60. Toews, M., Wells, W.: SIFT-Rank: ordinal description for invariant feature correspondence. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009, pp. 172–177 (2009)

  61. Farid, H., Popescu, A.: Blind removal of lens distortions. J. Opt. Soc. Am. A 18, 2072–2078 (2001)

  62. Clark, A., Grant, R., Green, R.: Perspective correction for improved visual registration using natural features. In: 23rd International Conference on Image and Vision Computing New Zealand (IVCNZ 2008). IEEE Computer Press, Los Alamitos (2008)

  63. Szeliski, R., Avidan, S., Anandan, P.: Layer extraction from multiple images containing reflections and transparency. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 246–253. IEEE (2000)

  64. Vedaldi, A., Fulkerson, B.: VLFeat: an open and portable library of computer vision algorithms. In: Proceedings of the International Conference on Multimedia, ACM, pp. 1469–1472 (2010)

  65. Sankaranarayanan, A.C., Veeraraghavan, A., Tuzel, O., Agrawal, A.: Specular surface reconstruction from sparse reflection correspondences. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 1245–1252 (2010)

  66. Beis, J., Lowe, D.: Shape indexing using approximate nearest-neighbour search in high-dimensional spaces. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1000–1006 (1997)

  67. Fischler, M., Bolles, R.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24, 381–395 (1981)

  68. Hartley, R., Gupta, R., Chang, T.: Stereo from uncalibrated cameras. In: Proceedings CVPR ’92, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 761–764 (1992)

  69. Sampson, P.: Fitting conic sections to “very scattered” data: an iterative refinement of the Bookstein algorithm. Comput. Graph. Image Process. 18, 97–108 (1982)

  70. Blinn, J.: Models of light reflection for computer synthesized pictures. In: ACM SIGGRAPH Computer Graphics, vol. 11, ACM, pp. 192–198 (1977)

  71. Doerschner, K., Kersten, D., Schrater, P.: Analysis of shape-dependent specular motion—predicting shiny and matte appearance. J. Vis. 8, 594 (2008)

  72. Gautama, T., Van Hulle, M.: A phase-based approach to the estimation of the optical flow field using spatial filtering. IEEE Trans. Neural Netw. 13, 1127–1136 (2002)

  73. Netz, A., Osadchy, M.: Using specular highlights as pose invariant features for 2D–3D pose estimation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 721–728 (2011)

Acknowledgments

This work was supported by a Marie Curie International Reintegration Grant (239494) within the Seventh European Community Framework Programme awarded to KD. KD has also been supported by a Turkish Academy of Sciences Young Scientist Award (TUBA GEBIP), a grant by the Scientific and Technological Research Council of Turkey (TUBITAK 1001, 112K069), and the EU Marie Curie Initial Training Network PRISM (FP7-PEOPLE-2012-ITN, Grant Agreement: 316746).

Author information

Correspondence to Ozgur Yilmaz.

Appendix

1.1 Algorithm parameters

  • SIFT peak threshold = 3

  • SIFT edge threshold = 10

  • SIFT feature elimination threshold = 5

  • SIFT matching threshold = 2

  • RANSAC iterations = 2,000

  • Sampson error = 0.02

  • Convolution kernel size = 60

  • Convolution kernel, Gaussian standard deviation = 30

  • Specular field threshold = \(1.5 \times 10^{-6}\)

  • Connected component area threshold = 1,000
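A minimal sketch of how the last three parameters above might interact (our own NumPy illustration, not the authors' code; the toy evidence map and its magnitudes are invented): sparse per-feature specularity evidence is smoothed with the 60-pixel, σ = 30 Gaussian kernel into a dense specular field, which is then thresholded at 1.5 × 10⁻⁶ to obtain candidate specular regions.

```python
import numpy as np

KSIZE, SIGMA = 60, 30.0      # convolution kernel size and Gaussian std. dev.
FIELD_THRESH = 1.5e-6        # specular field threshold

# 2D Gaussian kernel, normalized to unit sum
ax = np.arange(KSIZE) - (KSIZE - 1) / 2.0
g = np.exp(-ax**2 / (2 * SIGMA**2))
kernel = np.outer(g, g)
kernel /= kernel.sum()

def specular_field(evidence, kernel):
    """Smooth sparse per-pixel specularity evidence into a dense field.
    Evidence is sparse, so we simply splat the kernel at each nonzero pixel."""
    h, w = evidence.shape
    kh, kw = kernel.shape
    field = np.zeros((h + kh, w + kw))
    for y, x in zip(*np.nonzero(evidence)):
        field[y:y + kh, x:x + kw] += evidence[y, x] * kernel
    return field[kh // 2:kh // 2 + h, kw // 2:kw // 2 + w]

evidence = np.zeros((200, 200))
evidence[80:90, 100:110] = 1e-4          # a cluster of "specular" features (toy values)
field = specular_field(evidence, kernel)
mask = field > FIELD_THRESH              # candidate specular region
print(mask[85, 105], mask[0, 0])         # inside cluster vs. far corner
```

In the paper's pipeline, connected components of this mask smaller than the 1,000-pixel area threshold would then be discarded.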

1.2 Optic flow experiments

For the optical flow-based detection experiment, we kept the parameters the same as in [1], except that we used 5% of the optical flow vectors for the epipolar deviation computation. The Sampson error threshold, kernel size, and Gaussian standard deviation are identical to those used for the SIFT-based method.
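The epipolar deviation computation can be sketched as follows (a NumPy illustration under our reading of [2, 67], not the authors' implementation; the fundamental matrix is the one for a pure horizontal camera translation, and the flow fields are synthetic): points are displaced by their flow vectors, a random 5% subsample is drawn, and the first-order Sampson error of each correspondence with respect to a candidate fundamental matrix F quantifies its deviation from the epipolar geometry.

```python
import numpy as np

def sampson_error(F, x1, x2):
    """First-order (Sampson) epipolar deviation of correspondences x1 <-> x2
    (homogeneous 3xN arrays) with respect to a fundamental matrix F."""
    Fx1 = F @ x1
    Ftx2 = F.T @ x2
    num = np.sum(x2 * Fx1, axis=0) ** 2
    den = Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2
    return num / den

rng = np.random.default_rng(1)
n = 2000
pts = np.vstack([rng.uniform(0, 1, (2, n)), np.ones((1, n))])

# F for a pure horizontal camera translation (epipole at infinity along x)
F = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])

flow_rigid = np.vstack([np.full(n, 0.05), np.zeros(n), np.zeros(n)])  # consistent flow
flow_spec = flow_rigid + np.vstack(
    [np.zeros(n), 0.05 * rng.standard_normal(n), np.zeros(n)])        # deviating flow

idx = rng.choice(n, size=n // 20, replace=False)   # 5% subsample, as in the text
e_rigid = sampson_error(F, pts[:, idx], (pts + flow_rigid)[:, idx])
e_spec = sampson_error(F, pts[:, idx], (pts + flow_spec)[:, idx])
print(e_rigid.max() < 1e-12, e_spec.mean() > e_rigid.mean())  # → True True
```

Flow consistent with the rigid epipolar geometry yields zero Sampson error, while flow that strays off the epipolar lines, as specular reflections tend to do, accumulates deviation.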

Cite this article

Yilmaz, O., Doerschner, K. Detection and localization of specular surfaces using image motion cues. Machine Vision and Applications 25, 1333–1349 (2014). https://doi.org/10.1007/s00138-014-0610-9
