
Suppressing Style-Sensitive Features via Randomly Erasing for Domain Generalizable Semantic Segmentation

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13022)


Abstract

Domain generalization aims to enhance the robustness of models to different domains, which is crucial for safety-critical systems in practice. In this paper, we propose a simple plug-in module that promotes the generalization ability of semantic segmentation networks without any extra loss function. First, we rethink the relationship between semantics and style from the perspective of feature maps, and divide their channels into two kinds (i.e., style-sensitive channels and semantic-sensitive channels) via the variance of the Gram matrix. Second, under the assumption that domain shift mainly lies in style, we propose a random erasure method applied to style-sensitive channel features, with the aim of learning domain-invariant features and preventing the model from over-fitting to a specific domain. Extensive experiments demonstrate that the generalization ability of our proposed method surpasses that of existing approaches.
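The channel-splitting-and-erasing idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact criterion (here, the batch variance of each channel's Gram-matrix self-correlation), the top-k split, and all function and parameter names are assumptions.

```python
import numpy as np

def erase_style_channels(feat, k=4, p=0.5, rng=None):
    """Sketch (assumed details): rank channels of a feature map by the
    batch variance of their Gram-matrix diagonal (channel self-correlation),
    treat the top-k as style-sensitive, and randomly zero each of them
    with probability p during training."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    # Diagonal of the Gram matrix F F^T, normalized by spatial size: (N, C)
    gram_diag = (f * f).sum(axis=2) / (h * w)
    # Channels whose self-correlation varies most across the batch are
    # assumed to be style-sensitive.
    var = gram_diag.var(axis=0)
    style_idx = np.argsort(var)[-k:]
    # Randomly erase (zero out) each style-sensitive channel with prob. p.
    mask = np.ones(c)
    drop = rng.random(k) < p
    mask[style_idx[drop]] = 0.0
    return feat * mask.reshape(1, c, 1, 1), style_idx
```

In training, the erasure mask would be resampled on every forward pass; at test time the erasing would be disabled so that all channels pass through unchanged.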



Acknowledgement

This work is partially supported by the National Natural Science Foundation of China (Grant no. 61772568), the Guangdong Basic and Applied Basic Research Foundation (Grant no. 2019A1515012029), and the Youth Science and Technology Innovation Talent of Guangdong Special Support Program.

Author information

Correspondence to Meng Yang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Su, S., Wang, H., Yang, M. (2021). Suppressing Style-Sensitive Features via Randomly Erasing for Domain Generalizable Semantic Segmentation. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13022. Springer, Cham. https://doi.org/10.1007/978-3-030-88013-2_25


  • DOI: https://doi.org/10.1007/978-3-030-88013-2_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88012-5

  • Online ISBN: 978-3-030-88013-2
