Is Sharing of Egocentric Video Giving Away Your Biometric Signature?

Conference paper in Computer Vision – ECCV 2020 (ECCV 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12362)

Abstract

The easy availability of wearable egocentric cameras, and the sense of privacy created by the fact that the wearer is never seen in the captured videos, have led to a tremendous rise in the public sharing of such videos. Unlike hand-held cameras, egocentric cameras are mounted on the wearer’s head, which makes it possible to track the wearer’s head motion through the optical flow in the egocentric videos. In this work, we demonstrate a novel kind of privacy attack: extracting the wearer’s gait profile, a well-known biometric signature, from this optical flow. We show strong wearer-recognition capabilities based on the extracted gait features, an unprecedented and critical weakness entirely absent from hand-held videos. We demonstrate the following attack scenarios: (1) In a closed-set scenario, we show that the wearer of an egocentric video can be recognized with an accuracy of more than 92.5% on the benchmark video dataset. (2) In an open-set setting, where the system has never seen the camera wearer during training, we show that it is still possible to identify that two egocentric videos were captured by the same wearer, with an Equal Error Rate (EER) of less than 14.35%. (3) We show that a gait signature can be extracted even when only sparse optical flow, and no other scene information, is available from the egocentric video, achieving an accuracy of more than 84% for wearer recognition with only global optical flow. (4) While first-person to first-person matching does not give us access to the wearer’s face, we show that the extracted gait features can be matched against those obtained from a third-person view, such as a surveillance camera observing the wearer against a completely different background at a different time. In essence, our work indicates that sharing one’s egocentric video should be treated as giving away one’s biometric identity, and we recommend much greater oversight before egocentric videos are shared. The code, trained models, datasets, and annotations are available at https://egocentricbiometric.github.io/.
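The core signal behind the attack is cheap to compute. The sketch below is a minimal illustration, not the authors' released code (the function name and parameter choices are our own assumptions): it extracts the kind of "global optical flow" time series the abstract refers to, by computing dense Farnebäck flow between consecutive frames and averaging it over the frame, yielding a 2-D head-motion series that carries the wearer's gait periodicity.

```python
# Minimal sketch (assumption: not the paper's pipeline) of extracting a
# global optical-flow series from an egocentric video with OpenCV.
import cv2
import numpy as np

def global_flow_series(video_path: str) -> np.ndarray:
    """Return an (N-1, 2) array of mean (dx, dy) flow per frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense two-frame flow (Farneback); typical default parameters.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        # Averaging over the whole frame discards most scene content and
        # keeps the dominant camera (head) motion component.
        series.append(flow.reshape(-1, 2).mean(axis=0))
        prev_gray = gray
    cap.release()
    return np.asarray(series)
```

Note that no scene appearance survives this step: identity leaks through motion alone, which is why scenario (3) in the abstract still succeeds with only this sparse signal.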

Notes

  1. To put the numbers in perspective: for gait-based recognition from third-person views, the state-of-the-art EER (on a different third-person dataset) is 4%.
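For readers unfamiliar with the metric, EER is the operating point at which the false accept rate (FAR) equals the false reject rate (FRR). Below is a minimal sketch, assuming `genuine` and `impostor` are arrays of similarity scores for same-wearer and different-wearer video pairs (hypothetical inputs, not the paper's evaluation code):

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: sweep thresholds until false accepts match false rejects."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))  # point where the two rates cross
    return float((far[i] + frr[i]) / 2.0)
```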

Author information

Corresponding author

Correspondence to Daksh Thapar.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1447 KB)

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Thapar, D., Arora, C., Nigam, A. (2020). Is Sharing of Egocentric Video Giving Away Your Biometric Signature? In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision – ECCV 2020. Lecture Notes in Computer Science, vol. 12362. Springer, Cham. https://doi.org/10.1007/978-3-030-58520-4_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-58520-4_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58519-8

  • Online ISBN: 978-3-030-58520-4

  • eBook Packages: Computer Science (R0)
