
What is happening now? Detection of activities of daily living from simple visual features

  • Original Article
  • Personal and Ubiquitous Computing

Abstract

We propose and investigate a paradigm for activity recognition that distinguishes the “on-going activity” (OGA) recognition task from the one addressing “complete activities” (CA). The former starts from a time interval and aims to discover which activities are going on inside it. The latter focuses on terminated activities and amounts to taking an external perspective on activities. We argue that this distinction is quite natural and that the OGA task has a number of interesting properties: the possibility of reconstructing complete activities in terms of on-going ones, the avoidance of the thorny issue of activity segmentation, and a straightforward accommodation of complex activities. Moreover, some plausible properties of the OGA task are discussed and then investigated in a classification study, addressing the dependence of classification performance on the duration of time windows, its relationship with actional types (homogeneous vs. non-homogeneous activities), and the assortment of features used. Three types of visual features are exploited, obtained from a data set that tries to balance the pros and cons of laboratory-based and naturalistic data collection. The results partially confirm these hypotheses and point to relevant open issues for future work.
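As a rough illustration of the OGA setup described above, the following sketch cuts a stream of per-frame visual features into fixed-length time windows and trains an SVM (the classifier family cited in [30–32]) to predict which activity is going on in each window. All names, the window summarisation, and the synthetic data are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: sliding-window on-going activity (OGA) classification.
# Synthetic data and all design choices here are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stream: 600 frames x 3 simple visual features, with a latent
# per-frame activity label (0 vs. 1); class 1 frames are shifted so the
# toy problem is learnable.
frames = rng.normal(size=(600, 3))
labels = np.repeat([0, 1, 0, 1], 150)
frames[labels == 1] += 2.0

def windows(X, y, size):
    """Cut the stream into non-overlapping windows of `size` frames.

    Each window is summarised by simple statistics (per-feature mean and
    std) and labelled with the majority frame label inside it."""
    feats, targets = [], []
    for start in range(0, len(X) - size + 1, size):
        w = X[start:start + size]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        targets.append(np.bincount(y[start:start + size]).argmax())
    return np.array(feats), np.array(targets)

# The study varies window duration; here we just compare two sizes.
for size in (15, 60):
    Xw, yw = windows(frames, labels, size)
    split = len(Xw) // 2                     # first half train, second half test
    clf = SVC(kernel="rbf").fit(Xw[:split], yw[:split])
    acc = clf.score(Xw[split:], yw[split:])
    print(f"window={size:3d} frames  accuracy={acc:.2f}")
```

Note how the windowing sidesteps activity segmentation entirely: every window gets a label for what is going on inside it, and windows straddling an activity boundary are simply assigned the majority activity.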



Notes

  1. For a more detailed introduction to this topic, see [5].

  2. Of course, nothing prevents a machine from addressing the OGA and the CA tasks simultaneously. This is not what humans do, however, at least as reflected in the way they talk about activities.

  3. Our homogeneous versus non-homogeneous events/activities distinction is similar, but not identical, to that in use in linguistic semantics (see, e.g., [15]), since here we focus more on observable properties than on linguistic ones.

References

  1. Katz S, Ford AB, Moskowitz RW, Jackson BA, Jaffe MW (1963) Studies of illness in the aged. The index of ADL: a standardized measure of biological and psychological function. J Am Med Assoc 185:914–919

  2. Katz S (1983) Assessing self-maintenance: activities of daily living, mobility, and instrumental activities of daily living. J Am Geriatr Soc 31(12):712–726

  3. Business Week, May 18, (2006)

  4. Intille SS, Larson K, Tapia EM, Beaudin JS, Kaushik P, Nawyn J, Rockinson R (2006) Using a live-in laboratory for ubiquitous computing research. In: Fishkin KP, Schiele B, Nixon P, Quigley A (eds) PERVASIVE 2006, Lect Notes Comput Sci, vol 3968. Springer, Berlin/Heidelberg, pp 349–365

  5. Pianesi F, Varzi A (2000) Events and event talk: an introduction. In: Higginbotham J, Pianesi F, Varzi A (eds) Speaking of events. Oxford University Press, New York, pp 3–47

  6. Hamid R, Maddi S, Johnson A, Bobick A, Essa I, Isbell C (2009) A novel sequence representation for unsupervised analysis of human activities. Artif Intell 173(14):1221–1244

  7. Minnen D, Starner T, Essa I, Isbell C (2006) Discovering characteristic actions from on-body sensor data. In: International symposium on wearable computers (ISWC), Montreux, Switzerland

  8. Farrahi K, Gatica-Perez D (2009) Learning and predicting multimodal daily life patterns from cell phones. In: Proceedings of the international conference on multimodal interfaces (ICMI-MLMI), Cambridge, November 2009

  9. Pentney W, Philipose M, Bilmes J, Kautz H (2007) Learning large scale common sense models of everyday life. In: Proceedings of AAAI 2007, Vancouver, BC

  10. Munguia-Tapia E, Choudhury T, Philipose M (2006) Building reliable activity models using hierarchical shrinkage and mined ontology. In: Proceedings of PERVASIVE 2006, Dublin

  11. Philipose M, Fishkin KP, Perkowitz M, Patterson DJ, Fox D, Kautz H, Hähnel D (2004) Inferring activities from interaction with objects. IEEE Pervasive Comput 3(4):50–57

  12. Logan B, Healey J, Philipose M, Munguia Tapia E, Intille S (2007) A long-term evaluation of sensing modalities for activity recognition. In: Proceedings of the international conference on ubiquitous computing, LNCS vol 4717. Springer, Berlin/Heidelberg, pp 483–500

  13. Stikic M, Huynh T, Van Laerhoven K, Schiele B (2008) ADL recognition based on the combination of RFID and accelerometer sensing. In: Second international conference on pervasive computing technologies for healthcare (PervasiveHealth 2008), Jan 30–Feb 1 2008, pp 258–263

  14. Munguia Tapia E, Intille S, Haskell W, Larson K, Wright J, King A, Friedman R (2007) Real-time recognition of physical activities and their intensities using wireless accelerometers and a heart rate monitor. In: Proceedings of the international symposium on wearable computers. IEEE Press

  15. Albinali F, Goodwin MF, Intille S (2009) Recognizing stereotypical motor movements in the laboratory and classroom: a case study with children on the autism spectrum. In: Proceedings of the international conference on ubiquitous computing

  16. Ogawa M, Ochiai S, Shoji K, Nishihara M, Togawa T (2000) An attempt of monitoring daily activities at home. Presented at the world congress on medical physics and biomedical engineering, Chicago, Illinois

  17. Munguia Tapia E, Intille S, Lopez L, Larson K (2006) The design of a portable kit of wireless sensors for naturalistic data collection. In: Proceedings of PERVASIVE 2006, Dublin, Ireland. Springer, Berlin/Heidelberg

  18. Lester J, Choudhury T, Borriello G (2006) A practical approach to recognizing physical activities. In: Proceedings of PERVASIVE 2006, Dublin, Ireland. Springer, Berlin/Heidelberg

  19. Oliver N, Horvitz E, Garg A (2002) Layered representations for human activity recognition. In: 4th international conference on multimodal interfaces (ICMI’02), pp 3–8

  20. McCowan I, Gatica-Perez D, Bengio S, Lathoud G, Barnard M, Zhang D (2004) Automatic analysis of multimodal group actions in meetings. IEEE Trans Pattern Anal Mach Intell

  21. Wojek C, Nickel K, Stiefelhagen R (2006) Activity recognition and room-level tracking in an office environment. In: IEEE international conference on multisensor fusion and integration for intelligent systems, Sept 3–6 2006, Heidelberg, Germany

  22. Mihailidis A, Carmichael B, Boger J (2004) The use of computer vision in an intelligent environment to support aging in place, safety, and independence in the home. IEEE Trans Inf Technol Biomed 8(3):1–11 (Special issue on pervasive healthcare)

  23. Wu J, Osuntogun A, Choudhury T, Philipose M, Rehg J (2007) A scalable approach to activity recognition based on object use. In: Proceedings of the international conference on computer vision (ICCV), Rio de Janeiro, Brazil

  24. Park S, Kautz H (2008) Hierarchical recognition of activities of daily living using multi-scale, multi-perspective vision and RFID. In: Proceedings of the 4th IET international conference on intelligent environments, Seattle, WA

  25. Uhrikova Z, Nugent CD, Hlavac V (2008) The use of computer vision techniques to augment home based sensorised environments. In: Proceedings of the international conference of the IEEE engineering in medicine and biology society

  26. Brdiczka O, Langet M, Maisonnasse J, Crowley J (2008) Detecting human behavior models from multimodal observation in a smart home. IEEE Trans Autom Sci Eng 5(4):588–597

  27. Cappelletti A, Lepri B, Mana N, Pianesi F, Zancanaro M (2008) A multimodal data collection of daily activities in a real instrumented apartment. In: Proceedings of the LREC 2008 workshop on multimodal corpora: from models of natural interaction to systems and applications, held at the 6th international conference on language resources and evaluation, Marrakech, Morocco

  28. Lanz O, Chippendale P, Brunelli R (2007) An appearance-based particle filter for visual tracking in smart rooms. In: Multimodal technologies for perception of humans, LNCS vol 4625. Springer, pp 57–69

  29. Chippendale P (2006) Towards automatic body language annotation. In: Proceedings of the 7th international conference on automatic face and gesture recognition (FG2006), IEEE, Southampton, UK, pp 487–492

  30. Burges CJC (1998) A tutorial on support vector machines for pattern recognition. Data Min Knowl Disc 2(2):121–167

  31. Hsu CW, Lin CJ (2002) A comparison of methods for multi-class support vector machines. IEEE Trans Neural Netw 13:415–425

  32. Kressel U (1999) Pairwise classification and support vector machines. In: Scholkopf B, Burges CJC, Smola AJ (eds) Advances in kernel methods: support vector learning. MIT Press, Cambridge

  33. Agresti A (2002) Categorical data analysis. John Wiley & Sons, Hoboken

  34. Pearl J (1988) Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann, San Francisco

  35. Cooper G, Herskovits E (1992) A Bayesian method for the induction of probabilistic networks from data. Mach Learn 9:309–347

  36. Getoor L, Taskar B (2007) Introduction to statistical relational learning. MIT Press, Cambridge

  37. Richardson M, Domingos P (2006) Markov logic networks. Mach Learn 62:107–136


Acknowledgments

This work has been partially supported by the EU-funded NETCARITY project. The authors thank all participants in the data collection and the colleagues of TeV research group, who helped in the data collection setting and visual features extraction.

Author information

Correspondence to Bruno Lepri.


About this article

Cite this article

Lepri, B., Mana, N., Cappelletti, A. et al. What is happening now? Detection of activities of daily living from simple visual features. Pers Ubiquit Comput 14, 749–766 (2010). https://doi.org/10.1007/s00779-010-0290-z

