Background segmentation in multicolored illumination environments

  • Original article
  • Published in: The Visual Computer

Abstract

We present an algorithm for the segmentation of images into background and foreground regions. The proposed algorithm utilizes a physically based formulation of scene appearance which explicitly models the formation of shadows originating from colored light sources. This formulation enables a probabilistic model to distinguish between shadows and foreground objects in challenging images. A key component of the proposed method is an algorithm for estimating the illumination arriving at the scene. We evaluate our algorithm using synthetic and real-world data and show that the proposed method compares favorably with other commonly used segmentation methods.
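The core difficulty the abstract describes can be illustrated with a conventional baseline. The sketch below is not the paper's method; the frames, colors, and threshold are illustrative assumptions. It shows why naive per-pixel color differencing labels shadows as foreground, the failure mode the proposed physically based illumination model addresses:

```python
import numpy as np

# Minimal classical background-subtraction baseline (NOT the paper's
# method): a fixed background image and a per-pixel color-distance
# threshold. A shadow darkens pixels without changing their chromaticity,
# so this naive test misclassifies it as foreground.
background = np.full((48, 64, 3), 0.8)          # flat gray background frame
frame = background.copy()
frame[10:20, 10:20] *= 0.5                      # a "shadow": darker, same hue
frame[30:40, 40:50] = [0.1, 0.9, 0.2]           # a true foreground object

diff = np.linalg.norm(frame - background, axis=-1)
mask = diff > 0.25                              # naive threshold on RGB distance

print(mask[15, 15])   # True  -- shadow wrongly labeled as foreground
print(mask[35, 45])   # True  -- real object correctly detected
print(mask[0, 0])     # False -- unchanged background
```

Any fixed threshold on raw color distance faces the same trade-off: low enough to catch dark objects, it also fires on shadows.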


Notes

  1. Any shadows in the background image are treated as shading and are effectively ignored by the algorithm.

  2. Other methods include panoramic stitching [2] and light probe photography [26].

  3. On standard 8-bit images, the weights can be determined automatically since pixel values reside in [0, 255] making it easy to normalize their magnitude with the image dimensions. For high dynamic range images, this is not the case as the dynamic range of different images can vary by multiple orders of magnitude.

  4. This projection depends on the type of fisheye lens used and may require calibration [21].

  5. Alternatively, we can transform the image to the \(L^*a^*b^*\) color space and segment on the \(a^*\) and \(b^*\) components.

  6. In the first example, the blue component does not match the ground truth, as there is no significant blue tint in the input image, and so the recovery method can select arbitrary values for the blue channel without affecting the outcome.
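The chromaticity-based alternative mentioned in note 5 can be sketched as follows. This is a hypothetical illustration using scikit-image [35]; the surface colors, shadow factor, and chroma threshold are assumptions, not values from the paper. Because a shadowed pixel keeps (roughly) the chromaticity of the surface it falls on, a distance test on the \(a^*\)/\(b^*\) channels alone is far less sensitive to shadows than one on raw RGB:

```python
import numpy as np
from skimage import color

# Hypothetical sketch of segmenting on the a*/b* chroma channels of
# CIELAB while ignoring the lightness channel L*. A shadow mostly
# lowers L*, so the chroma distance stays small; a differently colored
# foreground object shifts a*/b* strongly.
background_rgb = np.full((32, 32, 3), [0.2, 0.5, 0.2])  # greenish surface
frame = background_rgb.copy()
frame[8:16, 8:16] *= 0.7                                # shadow: same hue, darker
frame[20:28, 20:28] = [0.7, 0.2, 0.2]                   # reddish foreground object

ab_bg = color.rgb2lab(background_rgb)[..., 1:]          # a*, b* of background
ab = color.rgb2lab(frame)[..., 1:]                      # a*, b* of current frame
dist = np.linalg.norm(ab - ab_bg, axis=-1)
mask = dist > 20.0                                      # chroma-only threshold

print(mask[12, 12])   # False -- shadowed pixel, chroma barely changed
print(mask[24, 24])   # True  -- foreground pixel, chroma clearly different
```

Note that \(a^*\)/\(b^*\) values are not perfectly invariant to lightness, so strong darkening still shifts the chroma somewhat; the threshold must absorb that residual variation.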

References

  1. Barnich, O., Van Droogenbroeck, M.: ViBe: a universal background subtraction algorithm for video sequences. IEEE Trans. Image Process. 20(6), 1709–1724 (2011)

  2. Brown, M., Lowe, D.G.: Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 74(1), 59–73 (2007)

  3. Chen, C.-C., Aggarwal, J.: Human shadow removal with unknown light source. In: 20th International Conference on Pattern Recognition, Istanbul, Turkey, vol. 27, pp. 2407–2410, August (2010)

  4. Chen, Z., Xu, Q., Cong, R., Huang, Q.: Global context-aware progressive aggregation network for salient object detection (2020). arXiv preprint arXiv:2003.00651

  5. Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: The European Conference on Computer Vision (ECCV), September (2018)

  6. Cong, R., Lei, J., Fu, H., Cheng, M.-M., Lin, W., Huang, Q.: Review of visual saliency detection with comprehensive information. IEEE Trans. Circ. Syst. Video Technol. 29(10), 2941–2959 (2018)

  7. Cong, R., Lei, J., Fu, H., Huang, Q., Cao, X., Ling, N.: HSCS: hierarchical sparsity based co-saliency detection for RGBD images. IEEE Trans. Multimed. 21(7), 1660–1671 (2018)

  8. Cong, R., Lei, J., Fu, H., Lin, W., Huang, Q., Cao, X., Hou, C.: An iterative co-saliency framework for RGBD images. IEEE Trans. Cybern. 49(1), 233–246 (2017)

  9. Cong, R., Lei, J., Fu, H., Porikli, F., Huang, Q., Hou, C.: Video saliency detection via sparsity-based reconstruction and propagation. IEEE Trans. Image Process. 28(10), 4819–4831 (2019)

  10. Damelin, S., Hoang, N.: On surface completion and image inpainting by biharmonic functions: numerical aspects. Int. J. Math. Math. Sci. 2018, 1–8 (2018)

  11. Duncan, K., Sarkar, S.: Saliency in images and video: a brief survey. IET Comput. Vis. 6(6), 514–523 (2012)

  12. Funt, B.V., Drew, M.S., Brockington, M.: Recovering shading from color images. In: Sandini, G. (ed.) Computer Vision—ECCV’92, pp. 124–132. Springer, Berlin, Heidelberg (1992)

  13. Garces, E., Munoz, A., Lopez-Moreno, J., Gutierrez, D.: Intrinsic images by clustering. Comput. Graph. Forum 31(4), 1415–1424 (2012)

  14. Garcia-Garcia, A., Orts-Escolano, S., Oprea, S., Villena-Martinez, V., Martinez-Gonzalez, P., Garcia-Rodriguez, J.: A survey on deep learning techniques for image and video semantic segmentation. Appl. Soft Comput. 70, 41–65 (2018)

  15. Godbehere, A.B., Goldberg, K.: Algorithms for visual tracking of visitors under variable-lighting conditions for a responsive audio art installation. In: LaViers, A., Egerstedt, M. (eds.) Controls and Art, pp. 181–204. Springer, Cham (2014)

  16. Guo, L., Xu, D., Qiang, Z.: Background subtraction using local SVD binary pattern. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 86–94 (2016)

  17. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969 (2017)

  18. Hsieh, J.-W., Hu, W.-F., Chang, C.-J., Chen, Y.-S.: Shadow elimination for effective moving object detection by Gaussian shadow modeling. Image Vis. Comput. 21(6), 505–516 (2003)

  19. Huerta, I., Amato, A., Roca, X., González, J.: Exploiting multiple cues in motion segmentation based on background subtraction. Neurocomputing 100, 183–196 (2013)

  20. Kaimakis, P., Tsapatsoulis, N.: Background modeling methods for visual detection of maritime targets. In: Proc. Int. Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Stream, Barcelona, Spain, pp. 67–76 (2013)

  21. Kannala, J., Brandt, S.S.: A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 28(8), 1335–1340 (2006)

  22. Khan, S.H., Bennamoun, M., Sohel, F., Togneri, R.: Automatic feature learning for robust shadow detection. In: Proc. Conf. Comp. Vision and Pattern Recognition, Columbus, OH, USA, pp. 1939–1946, June (2014)

  23. Ladas, N., Kaimakis, P., Chrysanthou, Y.: Probabilistic background modelling for sports video segmentation. In: Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications—Volume 4: VISAPP, (VISIGRAPP 2017), pp. 517–525. INSTICC, SciTePress (2017)

  24. Leone, A., Distante, C.: Shadow detection for moving objects based on texture analysis. Pattern Recogn. 40(4), 1222–1233 (2007)

  25. Levin, A., Lischinski, D., Weiss, Y.: A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 228–242 (2008)

  26. Levoy, M., Hanrahan, P.: Light field rendering. In: Proceedings of the 23rd annual conference on Computer Graphics and Interactive Techniques, pp. 31–42. ACM (1996)

  27. Li, C., Cong, R., Hou, J., Zhang, S., Qian, Y., Kwong, S.: Nested network with two-stream pyramid for salient object detection in optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 57(11), 9156–9166 (2019)

  28. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)

  29. Perazzi, F., Krahenbuhl, P., Pritch, Y., Hornung, A.: Saliency filters: contrast based filtering for salient region detection. In: Proc. Conf. on Comp. Vision and Pattern Recognition, pp. 733–740, June (2012)

  30. Pharr, M., Jakob, W., Humphreys, G.: Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann, San Francisco (2016)

  31. Reinhard, E., Heidrich, W., Debevec, P., Pattanaik, S., Ward, G., Myszkowski, K.: High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann (2010)

  32. Sanin, A., Sanderson, C., Lovell, B.C.: Improved shadow removal for robust person tracking in surveillance scenarios. In: Proc. Int. Conf. on Pattern Recognition, pp. 141–144, August (2010)

  33. Sanin, A., Sanderson, C., Lovell, B.C.: Shadow detection: a survey and comparative evaluation of recent methods. Pattern Recogn. 45(4), 1684–1695 (2012)

  34. Shahrian, E., Rajan, D., Price, B., Cohen, S.: Improving image matting using comprehensive sampling sets. In: Proc. Conf. Computer Vision and Pattern Recognition, pp. 636–643, June (2013)

  35. Van der Walt, S., Schönberger, J.L., Nunez-Iglesias, J., Boulogne, F., Warner, J.D., Yager, N., Gouillart, E., Yu, T.: scikit-image: image processing in Python. PeerJ 2, e453 (2014)

  36. Wang, J., Cohen, M.F.: Image and video matting: a survey. Found. Trends Comput. Graph. Vis. 3(2), 97–175 (2007)

  37. Yan, Q., Xu, L., Shi, J., Jia, J.: Hierarchical saliency detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1155–1162 (2013)

  38. Zivkovic, Z.: Improved adaptive Gaussian mixture model for background subtraction. In: Proc. 17th International Conf. on Pattern Recognition, Washington, DC, USA, vol. 2, pp. 28–31 (2004)

Author information

Corresponding author

Correspondence to Nikolas Ladas.

Ethics declarations

Conflict of Interest

Nikolas Ladas declares that he has no conflict of interest. Paris Kaimakis declares that he has no conflict of interest. Yiorgos Chrysanthou declares that he has no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Ladas, N., Kaimakis, P. & Chrysanthou, Y. Background segmentation in multicolored illumination environments. Vis Comput 37, 2221–2233 (2021). https://doi.org/10.1007/s00371-020-01981-8
