Abstract
We address RGB road scene material segmentation, i.e., per-pixel segmentation of materials in real-world driving views from RGB images alone, by building a new benchmark dataset and model tailored to the task. Our new dataset, KITTI-Materials, built on the well-established KITTI dataset, consists of 1000 frames covering 24 different road scenes of urban and suburban landscapes, with every pixel annotated in high quality with one of 20 material categories. It is the first dataset tailored to RGB material segmentation in realistic driving scenes, which allows us to train and test any RGB material segmentation model. Based on an analysis of KITTI-Materials, we identify the extraction and fusion of texture and context as the key to robust recognition of road scene material appearance. We introduce the Road scene Material Segmentation Network (RMSNet), a new Transformer-based framework that serves as a baseline for this challenging task. RMSNet encodes multi-scale hierarchical features with self-attention. We construct the decoder of RMSNet on a novel lightweight self-attention model, which we refer to as SAMixer. SAMixer achieves adaptive fusion of informative texture and context cues across multiple feature levels, and it significantly accelerates self-attention for feature fusion with a balanced query-key similarity measure. A built-in bottleneck of local statistics yields further efficiency and accuracy. Extensive experiments on KITTI-Materials validate the effectiveness of RMSNet. We believe our work lays a solid foundation for further studies on RGB road scene material segmentation.
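To make the texture-context fusion idea concrete, below is a minimal PyTorch sketch of a cross-scale attention fusion block in the spirit the abstract describes: fine "texture" features query a pooled set of coarse "context" tokens, with pooling standing in for the bottleneck of local statistics. The paper's actual SAMixer and its balanced query-key similarity measure are not reproduced here; the class name `CrossScaleFusion`, all shapes, and the average-pooled key/value bottleneck are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossScaleFusion(nn.Module):
    """Fuse a high-res "texture" stream with a low-res "context" stream.

    Texture tokens act as queries; context tokens are average-pooled to a
    small fixed grid before serving as keys/values, so attention cost scales
    with H*W * pool^2 instead of (H*W)^2. The pooling is an assumed stand-in
    for the "bottleneck of local statistics" named in the abstract.
    """

    def __init__(self, dim: int, num_heads: int = 4, pool: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(pool)  # assumed local-statistics bottleneck
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, texture: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        b, c, h, w = texture.shape
        # Flatten spatial maps into token sequences: (B, C, H, W) -> (B, N, C).
        q = self.norm_q(texture.flatten(2).transpose(1, 2))
        kv = self.norm_kv(self.pool(context).flatten(2).transpose(1, 2))
        # Cross-attention: every texture token attends to the pooled context tokens.
        fused, _ = self.attn(q, kv, kv)
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return texture + self.proj(fused)  # residual fusion

# Toy usage: fuse a stride-4 texture map with a stride-16 context map.
texture = torch.randn(1, 64, 64, 64)
context = torch.randn(1, 64, 16, 16)
print(CrossScaleFusion(dim=64)(texture, context).shape)  # torch.Size([1, 64, 64, 64])
```

Pooling keys and values to a fixed grid is one common way to make cross-attention cheap at high resolution; whether SAMixer uses this exact mechanism is an assumption here.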
Acknowledgements
This work was supported in part by JSPS 20H05951 and 21H04893, JST JPMJCR20G7, and SenseTime Japan.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cai, S., Wakaki, R., Nobuhara, S., Nishino, K. (2023). RGB Road Scene Material Segmentation. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13842. Springer, Cham. https://doi.org/10.1007/978-3-031-26284-5_16
Print ISBN: 978-3-031-26283-8
Online ISBN: 978-3-031-26284-5