Abstract
Object recognition from images with occlusion is a difficult problem. In this paper, we attempt to solve it by image completion. To interpolate the hidden regions of objects from the surrounding image data, we propose a new pix2pix-type image generation network in which transformers are used instead of convolutional networks in the generator. In this model, a U-Net composed of transformer blocks encodes the contextual relationships between pixels, and a subsequent transformer block generates the interpolated image from them. Since convolution is based on local features, it cannot interpolate images when the missing regions are large. By replacing it with a transformer, the missing regions can be inferred from the surrounding pixels. The effectiveness of the proposed method is confirmed by image interpolation experiments on several images with occlusions.
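The key contrast the abstract draws, between convolution's local receptive field and self-attention's global one, can be illustrated with a minimal NumPy sketch (not the authors' implementation; the patch grid size and embedding dimension here are hypothetical). In a single self-attention step, every patch token attends to every other patch token, so a masked patch can draw information from all visible patches at once:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a sequence of patch tokens.

    Each output token is a weighted sum over ALL input tokens, so
    information from any visible patch reaches a masked patch in one
    step (a convolution kernel, by contrast, only mixes neighbors).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Row-wise softmax: each row is a distribution over all patches.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Hypothetical sizes: a 4x4 grid of patches (16 tokens), embedding dim 8.
n_tokens, d = 16, 8
x = rng.standard_normal((n_tokens, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))

out, attn = self_attention(x, w_q, w_k, w_v)
print(out.shape)      # (16, 8): one updated embedding per patch
print(attn[0].sum())  # 1.0: token 0 attends to all 16 patches
```

In the paper's generator, blocks of this kind are stacked in a U-Net arrangement; the sketch only shows why a single attention layer already has the global reach that a convolution kernel lacks.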
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Matsuura, T., Nakayama, T. (2023). Estimation of Occlusion Region Using Image Completion by Network Model Consisting of Transformer and U-Net. In: Ossowski, S., Sitek, P., Analide, C., Marreiros, G., Chamoso, P., Rodríguez, S. (eds) Distributed Computing and Artificial Intelligence, 20th International Conference. DCAI 2023. Lecture Notes in Networks and Systems, vol 740. Springer, Cham. https://doi.org/10.1007/978-3-031-38333-5_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-38332-8
Online ISBN: 978-3-031-38333-5
eBook Packages: Intelligent Technologies and Robotics (R0)