A Trimodal Dataset: RGB, Thermal, and Depth for Human Segmentation and Temporal Action Detection

  • Conference paper
  • In: Pattern Recognition (DAGM GCPR 2023)

Abstract

Computer vision research and popular datasets are predominantly based on the RGB modality. However, RGB data degrades under poor lighting conditions and raises privacy concerns. Integrating thermal and depth data, or substituting them for RGB, offers a more robust and privacy-preserving alternative. We present TRISTAR (https://zenodo.org/record/7996570, https://github.com/Stippler/tristar), a public TRImodal Segmentation and acTion ARchive comprising registered sequences of RGB, depth, and thermal data. The dataset encompasses 10 unique environments, 18 camera angles, 101 shots, and 15,618 frames, which include human masks for semantic segmentation and dense labels for temporal action detection and scene understanding. We discuss the system setup, including sensor configuration and calibration, as well as the process of generating ground-truth annotations. In addition, we conduct a quality analysis of the proposed dataset and provide benchmark models as reference points for human segmentation and action detection. Using only the thermal and depth modalities, these models yield improvements in both human segmentation and action detection.
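
As a practical illustration of the data structure described above, the following Python sketch iterates over one shot of registered RGB, depth, and thermal frames together with per-frame human masks and action labels. It is a minimal sketch only: the directory layout, file names (rgb/, depth/, thermal/, masks/, labels.json), and file formats are assumptions made for illustration, not the published TRISTAR layout; the linked GitHub repository contains the authoritative loading code.

```python
"""Minimal loader sketch for a trimodal (RGB/depth/thermal) dataset.

Assumed, hypothetical layout (not taken from the TRISTAR release):
each shot directory holds aligned, identically indexed frames in
rgb/, depth/, thermal/, and masks/ subfolders, plus a labels.json
file mapping frame indices to lists of action labels.
"""
import json
from pathlib import Path

import numpy as np
from PIL import Image


def load_shot(shot_dir: str):
    """Yield (rgb, depth, thermal, mask, actions) per registered frame."""
    shot = Path(shot_dir)
    labels = json.loads((shot / "labels.json").read_text())  # assumed format
    for rgb_path in sorted((shot / "rgb").glob("*.png")):
        idx = rgb_path.stem  # frame index shared across all modalities
        rgb = np.asarray(Image.open(rgb_path))  # H x W x 3, uint8
        depth = np.asarray(Image.open(shot / "depth" / f"{idx}.png"))  # H x W, e.g. 16-bit depth
        thermal = np.asarray(Image.open(shot / "thermal" / f"{idx}.png"))  # H x W, radiometric
        mask = np.asarray(Image.open(shot / "masks" / f"{idx}.png")) > 0  # binary human mask
        yield rgb, depth, thermal, mask, labels.get(idx, [])


if __name__ == "__main__":
    # Example usage with a hypothetical shot directory.
    for rgb, depth, thermal, mask, actions in load_shot("tristar/shot_001"):
        print(rgb.shape, depth.shape, thermal.shape, int(mask.sum()), actions)
```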

Acknowledgments

This work was partly supported by the Austrian Research Promotion Agency (FFG) under the Grant Agreement No. 879744.

Author information

Corresponding author

Correspondence to Christian Stippel.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Stippel, C., Heitzinger, T., Kampel, M. (2024). A Trimodal Dataset: RGB, Thermal, and Depth for Human Segmentation and Temporal Action Detection. In: Köthe, U., Rother, C. (eds) Pattern Recognition. DAGM GCPR 2023. Lecture Notes in Computer Science, vol 14264. Springer, Cham. https://doi.org/10.1007/978-3-031-54605-1_2

  • DOI: https://doi.org/10.1007/978-3-031-54605-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-54604-4

  • Online ISBN: 978-3-031-54605-1

  • eBook Packages: Computer Science, Computer Science (R0)
