
Head Motion Signatures from Egocentric Videos

  • Conference paper
  • First Online:
Computer Vision – ACCV 2014 (ACCV 2014)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9005)

Included in the following conference series: ACCV – Asian Conference on Computer Vision

Abstract

The proliferation of surveillance cameras has created new privacy concerns, as people are captured daily without explicit consent and the video is kept in databases for a very long time. With the increasing popularity of wearable cameras like Google Glass, the problem is set to grow substantially. An important computer vision task is to enable a person (“subject”) to query the video database (“observer”) as to whether he or she has been captured on video. Following a positive answer, the subject may request a copy of the video, or ask to be “forgotten” by having the video erased from the database. Such queries should possess two properties: (i) the query should not reveal more information about the subject, further breaching his privacy; (ii) the query should certify that the subject is indeed the captured person before the video is sent to him or erased. This paper presents a possible solution for the case where the subject has a head-mounted camera, e.g. Google Glass. We propose to create a unique signature, based on the pattern of head motion, that can certify that the subject is indeed the person seen in a video. Unlike traditional biometric methods (face recognition, gait recognition, etc.), the proposed signature is temporally volatile and can identify the subject only at a particular time; it is of no use for any other place or time.
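As a rough illustration of the idea (the abstract does not specify the authors' pipeline, so the function name, the choice of Farneback optical flow, and all parameters below are our own assumptions), a head-motion signature could be extracted from an egocentric video by averaging the horizontal optical flow between consecutive frames as a proxy for instantaneous head yaw:

import cv2
import numpy as np

def head_motion_signature(video_path):
    """Mean horizontal optical flow per frame pair: a crude head-yaw proxy.

    Illustrative sketch only; not the authors' actual method.
    """
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read video: %s" % video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signature = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow; sparse Lucas-Kanade tracking [12] would
        # be an equally plausible choice here.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # The mean x-displacement of the scene approximates the camera's
        # (i.e. the head's) horizontal rotation, with opposite sign.
        signature.append(float(np.mean(flow[..., 0])))
        prev_gray = gray
    cap.release()
    return np.asarray(signature)

Such a signature could then be compared against the displacement track of each person visible in the observer's video (see the matching sketch after the Notes below).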


Notes

  1. We note that inertial devices (as used by [11]) could have been used for computing head-activity signatures as well. However, our experiments with such inertial devices yielded a very noisy signal that is not useful for our case. Moreover, the requirement of additional hardware restricts the potential application areas, whereas an image-based solution widens the scope of application.

  2. The observed displacement in the observer's signature can be in phase or in anti-phase with the subject's signature, depending on whether the observer sees the subject from the front or from the back (see the matching sketch after these notes).

  3. Any unequal division of the total score requirement would be more difficult to meet.

  4. In our implementation, the dimension of the vectors (the length of the signature) is usually more than 200 frames, corresponding to 3–4 s of video at 60 frames per second. This is a sufficiently large dimension for the proposed probabilistic analysis.
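To make Notes 2 and 4 concrete, here is a minimal sketch (our own illustration, not the paper's implementation) that scores a subject's signature against an observer-side displacement track using the absolute Pearson correlation [18]. Taking the absolute value handles the in-phase/anti-phase ambiguity of Note 2, and with signatures of about 200 samples (Note 4) a high score is very unlikely to arise by chance:

import numpy as np

def match_score(subject_sig, observer_sig):
    """Absolute Pearson correlation of two equal-length signatures."""
    s = subject_sig - subject_sig.mean()
    o = observer_sig - observer_sig.mean()
    denom = np.linalg.norm(s) * np.linalg.norm(o)
    if denom == 0.0:
        return 0.0  # a constant signature carries no information
    return abs(float(np.dot(s, o)) / denom)

# Toy usage with 200-sample signatures (3-4 s at 60 fps, per Note 4):
rng = np.random.default_rng(0)
sig = rng.standard_normal(200)
print(match_score(sig, -sig))                      # anti-phase copy -> 1.0
print(match_score(sig, rng.standard_normal(200)))  # unrelated -> near 0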

References

  1. Shiraga, K., Trung, N.T., Mitsugami, I., Mukaigawa, Y., Yagi, Y.: Gait-based person authentication by wearable cameras. In: International Conference on Networked Sensing Systems, pp. 1–7 (2012)

  2. Yao, A.C.C.: How to generate and exchange secrets. In: FOCS, pp. 162–167 (1986)

  3. Avidan, S., Butman, M.: Blind vision. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3953, pp. 1–13. Springer, Heidelberg (2006)

  4. Upmanyu, M., Namboodiri, A., Srinathan, K., Jawahar, C.: Efficient privacy preserving video surveillance. In: ICCV, pp. 1639–1646 (2009)

  5. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, New York (2003)

  6. Raudies, F., Neumann, H.: A review and evaluation of methods estimating ego-motion. CVIU 116, 606–633 (2012)

  7. Castle, R.O., Klein, G., Murray, D.W.: Video-rate localization in multiple maps for wearable augmented reality. In: IEEE ISWC (2008)

  8. Wu, C.: VisualSFM: A visual structure from motion system. http://ccwu.me/vsfm/

  9. VISCODA: Voodoo camera tracker. http://www.digilab.uni-hannover.de/

  10. Poleg, Y., Arora, C., Peleg, S.: Temporal segmentation of egocentric videos. In: CVPR (2014)

  11. Spriggs, E., De la Torre, F., Hebert, M.: Temporal segmentation and activity classification from first-person sensing. In: CVPRW (2009)

  12. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: IJCAI, pp. 674–679 (1981)

  13. Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. IJCV 92, 1–31 (2011)

  14. Enzweiler, M., Gavrila, D.: Monocular pedestrian detection: survey and experiments. TPAMI 31, 2179–2195 (2009)

  15. Ramanan, D.: Part-based models for finding people and estimating their pose. In: Moeslund, T.B., Hilton, A., Krüger, V., Sigal, L. (eds.) Visual Analysis of Humans, pp. 199–223. Springer, London (2011)

  16. Kidron, E., Schechner, Y.Y., Elad, M.: Pixels that sound. In: CVPR, pp. 88–95 (2005)

  17. Schmid Jr., J.: The relationship between the coefficient of correlation and the angle included between regression lines. J. Educ. Res. 41, 311–313 (1947)

  18. Wikipedia: Pearson product-moment correlation coefficient. http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient

  19. Fathi, A., Hodgins, J.K., Rehg, J.M.: Social interactions: a first-person perspective. In: CVPR (2012)

  20. Bradski, G.: OpenCV ver. 2.4.3 (2013)

  21. Zhu, X., Ramanan, D.: Face detection, pose estimation, and landmark localization in the wild. In: CVPR, pp. 2879–2886 (2012)

  22. Ho, H.T., Chellappa, R.: Automatic head pose estimation using randomly projected dense SIFT descriptors. In: ICIP, pp. 153–156 (2012)


Acknowledgement

This research was supported by Intel ICRC-CI, by the Israel Ministry of Science, and by the Israel Science Foundation.

Author information


Corresponding author

Correspondence to Shmuel Peleg.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Poleg, Y., Arora, C., Peleg, S. (2015). Head Motion Signatures from Egocentric Videos. In: Cremers, D., Reid, I., Saito, H., Yang, M.-H. (eds) Computer Vision – ACCV 2014. Lecture Notes in Computer Science, vol 9005. Springer, Cham. https://doi.org/10.1007/978-3-319-16811-1_21


  • DOI: https://doi.org/10.1007/978-3-319-16811-1_21

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-16810-4

  • Online ISBN: 978-3-319-16811-1

  • eBook Packages: Computer Science, Computer Science (R0)
