Abstract
Hand segmentation, a key component of human-computer interaction, palmprint recognition, and gesture recognition, is prone to interference from complex backgrounds, which degrades segmentation accuracy. In this paper, we propose a paired spatial U-Net (PSU-Net) for hand image segmentation. First, we improve traditional dilated spatial pyramid pooling into a paired spatial pyramid pooling (PSPP) module: by pairing the pyramid branches with low-level features, the PSPP exploits low-level feature information and strengthens the network's ability to capture edge detail. We then design a global attention fusion (GAF) module that efficiently combines low-level spatial detail with high-level semantic information to resolve blurred edges in complex backgrounds. Experiments on the HOF, GTEA, and EgoHands datasets show that the proposed approach performs well: PSU-Net achieves 76.19% mIoU on HOF, compared with 74.45% for DeepLabV3.
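The abstract only sketches the two modules, so the following PyTorch snippet is a minimal illustrative sketch rather than the authors' implementation: the channel sizes, the way each dilated branch is paired with projected low-level features in PSPP, and the squeeze-and-excitation-style gating in GAF are all assumptions made for the example.

```python
# Illustrative sketch of PSPP and GAF as described in the abstract.
# All design specifics below are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PSPP(nn.Module):
    """Paired spatial pyramid pooling (illustrative).

    Assumption: each dilated branch over the high-level features is paired
    with a 1x1 projection of the low-level features, so edge detail from
    shallow layers is retained in every pyramid branch.
    """

    def __init__(self, high_ch, low_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.low_proj = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        self.branches = nn.ModuleList(
            [nn.Conv2d(high_ch, out_ch, kernel_size=3, padding=r, dilation=r)
             for r in rates]
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, high, low):
        low = self.low_proj(low)
        low = F.interpolate(low, size=high.shape[-2:], mode="bilinear",
                            align_corners=False)
        # Pair every dilated high-level branch with the low-level projection.
        paired = [F.relu(branch(high) + low) for branch in self.branches]
        return self.fuse(torch.cat(paired, dim=1))


class GAF(nn.Module):
    """Global attention fusion (illustrative).

    Assumption: a global channel-attention vector computed from the
    high-level features reweights the low-level features before fusion,
    in the spirit of squeeze-and-excitation attention.
    """

    def __init__(self, high_ch, low_ch, out_ch):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_ch, low_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(high_ch + low_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, high, low):
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        low = low * self.attn(high)  # gate spatial detail by global semantics
        return self.fuse(torch.cat([high, low], dim=1))


if __name__ == "__main__":
    high = torch.randn(1, 256, 32, 32)   # deep, semantic features
    low = torch.randn(1, 64, 128, 128)   # shallow, detail-rich features
    ctx = PSPP(high_ch=256, low_ch=64, out_ch=128)(high, low)
    out = GAF(high_ch=128, low_ch=64, out_ch=64)(ctx, low)
    print(ctx.shape, out.shape)
```

The sketch keeps the two ideas separable: PSPP injects shallow detail into every dilated context branch, and GAF then uses global semantics to decide which of those details to keep at full resolution.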
References
Zhang, D., Kong, W.K., You, J., et al.: Online palmprint identification. IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1041–1050 (2003)
Han, Y., Sun, Z., Wang, F., Tan, T.: Palmprint recognition under unconstrained scenes. In: Yagi, Y., Kang, S.B., Kweon, I.S., Zha, H. (eds.) ACCV 2007. LNCS, vol. 4844, pp. 1–11. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-76390-1_1
Shuping, L., Yu, L., et al.: Hierarchical static hand gesture recognition by combining finger detection and HOG features. J. Image Graph. 20(6), 781–788 (2015)
Rahmat, R.F., Chairunnisa, T., Gunawan, D., et al.: Skin color segmentation using multi-color space threshold. In: 2016 3rd International Conference on Computer and Information Sciences, pp. 391–396 (2016)
Li, S., Tang, J., Wu, G.: Geologic surface reconstruction based on fault constraints. In: Second Workshop on Digital Media and its Application in Museum and Heritages (2007)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Wang, K., Liew, J.H., Zou, Y., et al.: PANet: few-shot image semantic segmentation with prototype alignment. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9197–9206 (2019)
Iandola, F.N., Han, S., Moskewicz, M.W., et al.: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360 (2016)
Howard, A.G., Zhu, M., Chen, B., et al.: MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
He, K., Zhang, X., Ren, S., et al.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Sandler, M., Howard, A., Zhu, M., et al.: MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520 (2018)
Zhang, X., Zhou, X., Lin, M., et al.: ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6848–6856 (2018)
Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017)
Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40(5), 834–848 (2017)
Chen, L.C., Papandreou, G., Schroff, F.: Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017)
Zhao, H., Shi, J., Qi, X., et al.: Pyramid scene parsing network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
Lin, T.Y., Dollár, P., Girshick, R., et al.: Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
Mnih, V., Heess, N., Graves, A.: Recurrent models of visual attention. In: Advances in Neural Information Processing Systems 27 (2014)
Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018)
Woo, S., Park, J., Lee, J.Y., et al.: CBAM: convolutional block attention module. In: Proceedings of the European Conference on Computer Vision, pp. 3–19 (2018)
Urooj, A., Borji, A.: Analysis of hand segmentation in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4710–4719 (2018)
Fathi, A., Ren, X., Rehg, J.M.: Learning to recognize objects in egocentric activities. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3281–3288 (2011)
Bambach, S., Lee, S., Crandall, D.J., et al.: Lending a hand: detecting hands and recognizing activities in complex egocentric interactions. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1949–1957 (2015)
Acknowledgements
This research is funded by the National Natural Science Foundation of China under Grants 61976224 and 61976088. We also thank the University of Central Florida, Georgia Tech, and Indiana University for sharing the hand image databases.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zhou, K., Qi, W., Gui, Z., Zeng, Q. (2022). PSU-Net: Paired Spatial U-Net for Hand Segmentation with Complex Backgrounds. In: Yu, S., et al. Pattern Recognition and Computer Vision. PRCV 2022. Lecture Notes in Computer Science, vol 13535. Springer, Cham. https://doi.org/10.1007/978-3-031-18910-4_44
Print ISBN: 978-3-031-18909-8
Online ISBN: 978-3-031-18910-4