
Mutual Use of Semantics and Geometry for CNN-Based Object Localization in ToF Images

  • Conference paper
Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12662)


Abstract

We propose a novel approach to localize a 3D object from the intensity and depth images provided by a Time-of-Flight (ToF) sensor. Our method builds on two convolutional neural networks (CNNs). The first takes raw depth and intensity images as input and segments the floor pixels, from which the extrinsic parameters of the camera are estimated. The second CNN is in charge of segmenting the object-of-interest so as to align its point cloud with a reference model. As its main innovation, the object segmentation exploits the calibration estimated from the prediction of the first CNN to represent the geometric depth information in a coordinate system that is attached to the ground, and is thus independent of the camera elevation. In practice, both the height of pixels with respect to the ground and the orientation of normals to the point cloud are provided as input to the second CNN.
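To make the ground-attached representation concrete, the following minimal sketch (not the authors' implementation; all function names are hypothetical, and only NumPy is assumed) fits a plane to floor-labeled points, orients its normal towards the camera, and derives the two per-pixel geometric inputs named in the abstract: height above the ground and the orientation of local surface normals relative to the ground.

    import numpy as np

    def fit_ground_plane(floor_points):
        """Least-squares plane fit to floor-labeled 3D points of shape (N, 3).

        Returns a unit normal n and offset d with n @ p + d ~= 0 for floor
        points p; n is oriented towards the camera (the origin), so d is
        the camera elevation above the floor.
        """
        centroid = floor_points.mean(axis=0)
        # The singular vector with the smallest singular value is the
        # direction of least variance of the points: the plane normal.
        _, _, vt = np.linalg.svd(floor_points - centroid, full_matrices=False)
        normal = vt[-1]
        if normal @ centroid > 0:  # flip so the normal points at the camera
            normal = -normal
        return normal, -normal @ centroid

    def height_map(point_cloud, normal, d):
        """Signed height above the ground for an organized (H, W, 3) cloud."""
        return point_cloud @ normal + d

    def normal_orientation(point_cloud, ground_normal):
        """Cosine between local surface normals and the ground normal.

        Surface normals are estimated by finite differences over the
        organized point cloud; the sign depends on traversal order, so the
        magnitude of the cosine is the robust cue.
        """
        du = np.gradient(point_cloud, axis=1)  # tangent along image columns
        dv = np.gradient(point_cloud, axis=0)  # tangent along image rows
        n = np.cross(du, dv)
        n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9
        return n @ ground_normal

Because both features are expressed relative to the estimated ground plane, they are invariant to the height and tilt of the camera, which is precisely the property the abstract attributes to the floor-aware representation.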

Our experiments, dealing with bed localization in nursing homes and hospitals, demonstrate that the proposed floor-aware approach improves segmentation and localization accuracy by a significant margin, both compared to a conventional CNN architecture that ignores calibration and height maps, and compared to PointNet++.
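The abstract states that the segmented object's point cloud is aligned with a reference model, but does not spell out the registration algorithm. As one standard building block for such rigid alignment, here is a minimal Kabsch/Procrustes sketch (an illustrative assumption, not the method reported in the paper), recovering the best-fit rotation and translation between two point sets already in correspondence.

    import numpy as np

    def rigid_align(source, target):
        """Least-squares rigid transform (R, t) with R @ source_i + t ~= target_i.

        source, target: (N, 3) arrays of corresponding points. Uses the
        Kabsch algorithm: SVD of the cross-covariance of the centered sets.
        """
        src_c = source.mean(axis=0)
        tgt_c = target.mean(axis=0)
        H = (source - src_c).T @ (target - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection: force det(R) = +1.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = tgt_c - R @ src_c
        return R, t

In practice, correspondences between a segmented ToF cloud and a reference model are unknown, so a step like this would typically sit inside an ICP-style loop that alternates nearest-neighbor matching and re-alignment.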

A. Vanderschueren and V. Joos contributed equally to the paper.


Notes

  1. The tool developed for annotation is available at https://github.com/ispgroupucl/tofLabelImg.

References

  1. Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11211, pp. 833–851. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01234-2_49


  2. Fooladgar, F., Kasaei, S.: A survey on indoor RGB-D semantic segmentation: from hand-crafted features to deep convolutional neural networks. Multimed. Tools Appl. 79(7), 4499–4524 (2019). https://doi.org/10.1007/s11042-019-7684-3


  3. Gupta, S., Girshick, R., Arbeláez, P., Malik, J.: Learning rich features from RGB-D images for object detection and segmentation. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8695, pp. 345–360. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10584-0_23


  4. Hazirbas, C., Ma, L., Domokos, C., Cremers, D.: FuseNet: incorporating depth into semantic segmentation via fusion-based CNN architecture. In: Lai, S.-H., Lepetit, V., Nishino, K., Sato, Y. (eds.) ACCV 2016. LNCS, vol. 10111, pp. 213–228. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-54181-5_14


  5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR, pp. 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90

  6. Holz, D., Schnabel, R., Droeschel, D., Stückler, J., Behnke, S.: Towards semantic scene analysis with time-of-flight cameras. In: Ruiz-del-Solar, J., Chown, E., Plöger, P.G. (eds.) RoboCup 2010. LNCS (LNAI), vol. 6556, pp. 121–132. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-20217-9_11


  7. Hou, J., Dai, A., Nießner, M.: 3D-SIS: 3D semantic instance segmentation of RGB-D scans. In: CVPR, pp. 4416–4425 (2019). https://doi.org/10.1109/CVPR.2019.00455

  8. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: CVPR, pp. 7132–7141 (2018). https://doi.org/10.1109/CVPR.2018.00745

  9. Hu, X., Yang, K., Fei, L., Wang, K.: ACNet: attention based network to exploit complementary features for RGB-D semantic segmentation. In: ICIP, pp. 1440–1444 (2019). https://doi.org/10.1109/ICIP.2019.8803025

  10. Jia, L., Radke, R.J.: Using time-of-flight measurements for privacy-preserving tracking in a smart room. IEEE Trans. Ind. Inform. 10(1), 689–696 (2014). https://doi.org/10.1109/TII.2013.2251892

  11. Jiang, J., Zheng, L., Luo, F., Zhang, Z.: RedNet: residual encoder-decoder network for indoor RGB-D semantic segmentation. arXiv preprint arXiv:1806.01054 (2018). http://arxiv.org/abs/1806.01054

  12. Landrieu, L., Simonovsky, M.: Large-scale point cloud semantic segmentation with superpoint graphs. In: CVPR, pp. 4558–4567 (2018). http://arxiv.org/abs/1711.09869

  13. Li, G., Müller, M., Thabet, A., Ghanem, B.: DeepGCNs: can GCNs go as deep as CNNs? In: ICCV (2019). http://arxiv.org/abs/1904.03751

  14. Liang, Z., Yang, M., Deng, L., Wang, C., Wang, B.: Hierarchical depthwise graph convolutional neural network for 3D semantic segmentation of point clouds. In: ICRA, pp. 8152–8158 (2019). https://doi.org/10.1109/ICRA.2019.8794052

  15. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: ICLR (2019)

  16. Maddalena, L., Petrosino, A.: Background subtraction for moving object detection in RGBD data: a survey. J. Imaging 4(5) (2018). https://doi.org/10.3390/jimaging4050071

  17. Milletari, F., Navab, N., Ahmadi, S.A.: V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: 3DV, pp. 565–571 (2016). https://doi.org/10.1109/3DV.2016.79

  18. Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. In: NeurIPS (2019). http://arxiv.org/abs/1912.01703

  19. Qi, C.R., Litany, O., He, K., Guibas, L.J.: Deep hough voting for 3D object detection in point clouds. In: ICCV (2019). http://arxiv.org/abs/1904.09664

  20. Qi, C.R., Liu, W., Wu, C., Su, H., Guibas, L.J.: Frustum PointNets for 3D object detection from RGB-D data. In: CVPR (2018). https://doi.org/10.1109/CVPR.2018.00102, http://arxiv.org/abs/1711.08488

  21. Qi, C.R., Su, H., Mo, K., Guibas, L.J.: PointNet: deep learning on point sets for 3D classification and segmentation. In: CVPR (2017)

  22. Qi, C.R., Yi, L., Su, H., Guibas, L.J.: PointNet++: deep hierarchical feature learning on point sets in a metric space. In: NeurIPS (2017)

  23. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28


  24. Shi, S., Wang, X., Li, H.: PointRCNN: 3D object proposal generation and detection from point cloud. In: CVPR, pp. 770–779 (2019). https://doi.org/10.1109/CVPR.2019.00086

  25. Simon, C., Meessen, J., De Vleeschouwer, C.: Visual event recognition using decision trees. Multimed. Tools Appl. 50(1), 95–121 (2010). https://doi.org/10.1007/s11042-009-0364-y

  26. Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic scene completion from a single depth image. In: CVPR (2017). https://doi.org/10.1109/CVPR.2017.28, http://arxiv.org/abs/1611.08974

  27. Wijmans, E.: PointNet++ PyTorch implementation. GitHub (2018). https://github.com/erikwijmans/Pointnet2_PyTorch

  28. Yang, L., Ren, Y., Zhang, W.: 3D depth image analysis for indoor fall detection of elderly people. Digit. Commun. Netw. 2(1), 24–34 (2016). https://doi.org/10.1016/j.dcan.2015.12.001

  29. Yu, C., et al.: BiSeNet: bilateral segmentation network for real-time semantic segmentation. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11217, pp. 334–349. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01261-8_20



Author information


Correspondence to Antoine Vanderschueren or Victor Joos.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Vanderschueren, A., Joos, V., De Vleeschouwer, C. (2021). Mutual Use of Semantics and Geometry for CNN-Based Object Localization in ToF Images. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12662. Springer, Cham. https://doi.org/10.1007/978-3-030-68790-8_17


  • DOI: https://doi.org/10.1007/978-3-030-68790-8_17


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68789-2

  • Online ISBN: 978-3-030-68790-8

  • eBook Packages: Computer Science, Computer Science (R0)
