Abstract
In contrast to traditional transformer blocks, which use a set of pre-defined parameters as positional embeddings, we propose the input-aware positional embedding (IPE), which is generated dynamically from the input features. We implement this idea by designing the IPE transformer, which generalizes better across arbitrary input sizes. To verify its effectiveness, we integrate the newly designed transformer into NLSPN and GuideNet, two representative depth completion networks. Experimental results on a large-scale outdoor depth completion dataset show that the proposed transformer effectively models long-range dependencies with manageable memory overhead.
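Since the chapter body is not included in this excerpt, the following is only a minimal sketch of the core idea: a positional embedding generated from the input feature map itself rather than looked up from a fixed-size parameter table, so the same module handles arbitrary input resolutions. The module name and the depthwise-separable generator are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class InputAwarePositionalEmbedding(nn.Module):
    """Sketch of an input-aware positional embedding (IPE).

    A pre-defined embedding table has a fixed shape (seq_len, dim) and
    breaks when the input size changes. Here the embedding is computed
    from the feature map itself. The conv-based generator below is a
    hypothetical design choice, not the paper's exact layer.
    """

    def __init__(self, dim: int):
        super().__init__()
        # Lightweight generator: a depthwise conv captures local spatial
        # layout, then a pointwise conv mixes channels.
        self.generator = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),
            nn.Conv2d(dim, dim, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim, height, width), any spatial size.
        # Add the dynamically generated embedding to the features.
        return x + self.generator(x)


if __name__ == "__main__":
    ipe = InputAwarePositionalEmbedding(dim=64)
    # Unlike a fixed-length embedding table, the same module applies
    # to different input resolutions without retraining or resizing.
    for h, w in [(32, 32), (48, 96)]:
        out = ipe(torch.randn(2, 64, h, w))
        print(out.shape)
```

The output of such a module would feed into the attention layers of the transformer block in place of a fixed positional embedding; how the IPE transformer combines it with self-attention is detailed in the full chapter.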
References
Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32(11), 1231–1237 (2013)
Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
Hu, M., Wang, S., Li, B., Ning, S., Fan, L., Gong, X.: PENet: towards precise and efficient image guided depth completion. arXiv preprint arXiv:2103.00783 (2021)
Liu, L., et al.: FCFR-net: feature fusion based coarse-to-fine residual learning for monocular depth completion. arXiv preprint arXiv:2012.08270 (2020)
Tang, J., Tian, F.P., Feng, W., Li, J., Tan, P.: Learning guided convolutional network for depth completion. IEEE Trans. Image Process. 30, 1116–1129 (2020)
Zhao, S., Gong, M., Fu, H., Tao, D.: Adaptive context-aware multi-modal network for depth completion. arXiv preprint arXiv:2008.10833 (2020)
Li, A., Yuan, Z., Ling, Y., Chi, W., Zhang, C.: A multi-scale guided cascade hourglass network for depth completion. In: The IEEE Winter Conference on Applications of Computer Vision, pp. 32–40 (2020)
Liu, Z., et al.: Swin transformer: hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030 (2021)
Wang, H., et al.: Axial-DeepLab: stand-alone axial-attention for panoptic segmentation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12349, pp. 108–126. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58548-8_7
Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. arXiv preprint arXiv:2103.13413 (2021)
Li, S., Sui, X., Luo, X., Xu, X., Liu, Y., Goh, R.S.M.: Medical image segmentation using squeeze-and-expansion transformers. arXiv preprint arXiv:2105.09511 (2021)
Wang, Y., et al.: End-to-end video instance segmentation with transformers. arXiv preprint arXiv:2011.14503 (2020)
Wu, B., et al.: Visual transformers: token-based image representation and processing for computer vision. arXiv preprint arXiv:2006.03677 (2020)
Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9912, pp. 483–499. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46484-8_29
Liu, S., De Mello, S., Gu, J., Zhong, G., Yang, M.H., Kautz, J.: Learning affinity via spatial propagation networks. In: Advances in Neural Information Processing Systems, pp. 1520–1530 (2017)
Cheng, X., Wang, P., Yang, R.: Learning depth with convolutional spatial propagation network. arXiv preprint arXiv:1810.02695 (2018)
Cheng, X., Wang, P., Guan, C., Yang, R.: CSPN++: learning context and resource aware convolutional spatial propagation networks for depth completion. In: AAAI, pp. 10615–10622 (2020)
Park, J., Joo, K., Hu, Z., Liu, C.K., Kweon, I.S.: Non-local spatial propagation network for depth completion. arXiv preprint arXiv:2007.10042 (2020)
Dai, J., et al.: Deformable convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 764–773 (2017)
Zhu, X., Hu, H., Lin, S., Dai, J.: Deformable convnets v2: more deformable, better results. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9308–9316 (2019)
Vaswani, A., et al.: Attention is all you need. Adv. Neural Inf. Process. Syst. 30, 5998–6008 (2017)
Dosovitskiy, A., et al.: An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. arXiv preprint arXiv:2005.12872 (2020)
Parmar, N., Ramachandran, P., Vaswani, A., Bello, I., Levskaya, A., Shlens, J.: Stand-alone self-attention in vision models. arXiv preprint arXiv:1906.05909 (2019)
Zheng, S., et al.: Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. arXiv preprint arXiv:2012.15840 (2020)
Acknowledgement
We thank all editors and reviewers for their helpful suggestions. This work is supported by the National Natural Science Foundation of China (No. 61906031) and the Fundamental Research Funds for the Central Universities (No. DUT21RC(3)025).