Estimation of Occlusion Region Using Image Completion by Network Model Consisting of Transformer and U-Net

  • Conference paper
Distributed Computing and Artificial Intelligence, 20th International Conference (DCAI 2023)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 740)


Abstract

Object recognition from images with occlusion is a difficult problem. In this paper, we attempt to solve it by image completion. To interpolate the hidden regions of objects from the surrounding image data, we propose a new pix2pix-type image generation network in which transformers are used instead of convolutional networks in the generator. In this model, a U-Net composed of transformer blocks encodes the contextual relationships between pixels, and an additional transformer block that follows generates the interpolated image from them. Because convolution relies on local features, it cannot interpolate images when the missing regions are large; by replacing it with a transformer, the missing regions can be inferred from the surrounding pixels. The effectiveness of the proposed method is confirmed by image interpolation experiments on several images with occlusions.
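
The abstract does not give the architectural details (patch size, block depth, attention variant, or losses), so the following is only a minimal PyTorch sketch of the idea as described: a U-Net-shaped generator whose encoder and decoder stages are transformer blocks operating on patch tokens, with one additional transformer block applied before the completed image is reconstructed. All module names, dimensions, and the use of nn.TransformerEncoderLayer are illustrative assumptions, not the authors' reported configuration.

# Minimal sketch (PyTorch), assumption-laden: a pix2pix-style generator in
# which the U-Net's convolutional blocks are replaced by transformer encoder
# blocks over patch tokens, plus one extra transformer block before the
# completed image is reconstructed.  Patch size, widths, depths, and the use
# of nn.TransformerEncoderLayer are illustrative, not the authors' setup.
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Split the image into non-overlapping patches and project them to tokens."""
    def __init__(self, in_ch=3, dim=96, patch=4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                       # (B, C, H, W)
        x = self.proj(x)                        # (B, D, H/p, W/p)
        return x.flatten(2).transpose(1, 2)     # (B, N, D) token sequence


def transformer_stage(dim, depth=2, heads=4):
    """A stack of standard transformer encoder blocks used in place of conv blocks."""
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       dim_feedforward=4 * dim, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)


class TransformerUNetGenerator(nn.Module):
    """U-Net-shaped generator: transformer stages with a skip connection,
    followed by an additional transformer block that produces the output tokens."""
    def __init__(self, in_ch=3, out_ch=3, dim=96, patch=4, img_size=128):
        super().__init__()
        self.grid = img_size // patch
        self.embed = PatchEmbed(in_ch, dim, patch)
        self.pos = nn.Parameter(torch.zeros(1, self.grid * self.grid, dim))
        self.enc1 = transformer_stage(dim)
        self.widen = nn.Linear(dim, 2 * dim)          # deeper stage works in a wider token space
        self.enc2 = transformer_stage(2 * dim)
        self.narrow = nn.Linear(2 * dim, dim)
        self.dec1 = transformer_stage(dim)
        self.fuse = nn.Linear(2 * dim, dim)           # merge the U-Net skip connection
        self.extra = transformer_stage(dim, depth=1)  # the additional generating block
        self.to_img = nn.ConvTranspose2d(dim, out_ch, kernel_size=patch, stride=patch)

    def forward(self, x):
        t = self.embed(x) + self.pos                  # (B, N, D) tokens with positions
        s1 = self.enc1(t)                             # encoder features kept for the skip
        t = self.enc2(self.widen(s1))
        t = self.dec1(self.narrow(t))
        t = self.fuse(torch.cat([t, s1], dim=-1))     # skip connection, U-Net style
        t = self.extra(t)                             # generate the completed tokens
        b, n, d = t.shape
        t = t.transpose(1, 2).reshape(b, d, self.grid, self.grid)
        return torch.tanh(self.to_img(t))             # completed image in [-1, 1]


if __name__ == "__main__":
    g = TransformerUNetGenerator()
    masked = torch.randn(1, 3, 128, 128)   # an image with an occluded region
    print(g(masked).shape)                 # torch.Size([1, 3, 128, 128])

A full pix2pix-style setup would train such a generator adversarially against a discriminator together with a reconstruction loss on the occluded region; those components, and the exact attention variant used in the paper, are omitted from this sketch.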

Author information

Corresponding author

Correspondence to Tomoya Matsuura.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Matsuura, T., Nakayama, T. (2023). Estimation of Occlusion Region Using Image Completion by Network Model Consisting of Transformer and U-Net. In: Ossowski, S., Sitek, P., Analide, C., Marreiros, G., Chamoso, P., Rodríguez, S. (eds) Distributed Computing and Artificial Intelligence, 20th International Conference. DCAI 2023. Lecture Notes in Networks and Systems, vol 740. Springer, Cham. https://doi.org/10.1007/978-3-031-38333-5_2
