Abstract
In the next forty years, the number of people living with dementia is expected to triple. In the late stages of the disease, patients become dependent on caregivers, which limits their autonomy and carries a huge social cost in time, money, and effort. Given this scenario, we propose a ubiquitous system capable of recognizing specific daily actions. The system fuses and synchronizes data obtained from two complementary modalities: ambient and egocentric. The ambient modality consists of a fixed RGB-Depth camera used for user recognition, object recognition, and user-object interaction detection, whereas the egocentric point of view is provided by a personal area network (PAN) formed by a few wearable sensors and a smartphone, used for gesture recognition. The system processes multi-modal data in real time, performing parallel task recognition and modality synchronization. It shows high performance in recognizing subjects, objects, and interactions, demonstrating its reliability for deployment in real-world scenarios.
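As a rough illustration of the synchronization step mentioned in the abstract, the following minimal Python sketch aligns detections from the ambient and egocentric streams by timestamp before fusing them. The event names, data layout, and the 0.5 s window are assumptions made purely for illustration; they are not the system's actual data structures or parameters.

```python
# Illustrative sketch only: aligning ambient (RGB-D) and egocentric (PAN)
# detections by timestamp before fusing them. AMBIENT_EVENTS, EGO_EVENTS
# and the 0.5 s window are hypothetical, not taken from the paper.
from bisect import bisect_left

# Each stream: list of (timestamp_in_seconds, label), sorted by timestamp.
AMBIENT_EVENTS = [(10.2, "user_at_table"), (12.8, "object:cup"), (15.1, "interaction:cup")]
EGO_EVENTS = [(12.9, "gesture:reach"), (15.0, "gesture:lift")]

SYNC_WINDOW = 0.5  # max time offset (s) for two detections to be fused


def fuse(ambient, ego, window=SYNC_WINDOW):
    """Pair every egocentric gesture with the closest ambient detection in time."""
    times = [t for t, _ in ambient]
    fused = []
    for t, gesture in ego:
        i = bisect_left(times, t)
        # Consider the neighbours on both sides of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(ambient)]
        best = min(candidates, key=lambda j: abs(times[j] - t), default=None)
        if best is not None and abs(times[best] - t) <= window:
            fused.append((t, gesture, ambient[best][1]))
    return fused


print(fuse(AMBIENT_EVENTS, EGO_EVENTS))
# [(12.9, 'gesture:reach', 'object:cup'), (15.0, 'gesture:lift', 'interaction:cup')]
```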
Notes
- 1.
Note that, due to the enrichment provided by range data, a single Gaussian model suffices for modeling the background; only very small improvements have been observed with Gaussian Mixture Models in the studied environments (an illustrative sketch follows these notes).
- 2.
The extension to a small set of gestures of interest can be easily achieved without a significant loss in performance [11].
- 3.
Notice that the drinking action is not detected because the system is sensitive to the hand orientation.
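As a rough illustration of note 1, the following Python sketch maintains a per-pixel single Gaussian background model over depth frames. The update rate, threshold, and initial variance are assumed values chosen for the example, not the parameters used in the paper.

```python
# Minimal sketch of note 1's idea: a per-pixel single Gaussian background
# model over depth values. ALPHA, K and the initial variance are assumed
# values for illustration only.
import numpy as np

ALPHA = 0.02   # running update rate for mean/variance (assumed)
K = 2.5        # a pixel is foreground if it deviates by more than K sigma (assumed)


class DepthBackgroundModel:
    def __init__(self, first_depth_frame):
        self.mean = first_depth_frame.astype(np.float64)
        self.var = np.full_like(self.mean, 100.0)  # initial variance (mm^2, assumed)

    def apply(self, depth):
        depth = depth.astype(np.float64)
        diff = depth - self.mean
        foreground = diff ** 2 > (K ** 2) * self.var
        # Update the Gaussian only where the pixel still looks like background.
        bg = ~foreground
        self.mean[bg] += ALPHA * diff[bg]
        self.var[bg] = (1 - ALPHA) * self.var[bg] + ALPHA * diff[bg] ** 2
        return foreground


# Usage: feed consecutive depth frames, e.g. from the fixed RGB-D camera.
model = DepthBackgroundModel(np.full((480, 640), 2000.0))  # 2 m flat background
frame = np.full((480, 640), 2000.0)
frame[200:280, 300:360] = 1200.0                            # a person enters the scene
mask = model.apply(frame)
print(mask.sum(), "foreground pixels")
```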
References
Stone, E.E., Skubic, M.: Evaluation of an inexpensive depth camera for passive in-home fall risk assessment. In: Pervasive Computing Technologies for Healthcare (PervasiveHealth), pp. 71–11 (2011)
Zhang, C., Tian, Y., Capezuti, E.: Privacy preserving automatic fall detection for elderly using RGBD cameras. In: Miesenberger, K., Karshmer, A., Penaz, P., Zagler, W. (eds.) ICCHP 2012, Part I. LNCS, vol. 7382, pp. 625–633. Springer, Heidelberg (2012)
Banerjee, T., Keller, J., Skubic, M., Stone, E.E.: Day or night activity recognition from video using fuzzy clustering techniques. IEEE Transactions on Fuzzy Systems, pp. 1–1 (2013)
Shotton, J., Fitzgibbon, A., Cook, M., et al.: Real-time human pose recognition in parts from single depth images. In: CVPR, pp. 1297–1304 (2011)
Escalera, S.: Human behavior analysis from depth maps. In: Articulated Motion and Deformable Objects (AMDO 2012), pp. 282–292 (2012)
Clapés, A., Reyes, M., Escalera, S.: Multi-modal user identification and object recognition surveillance system. Pattern Recogn. Lett. 34(7), 799–808 (2013)
Rusu, R.B., Blodow, N., Beetz, M.: Fast point feature histograms (FPFH) for 3D registration. In: The IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan (2009)
Felzenszwalb, P.F., McAllester, D.A., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. In: CVPR, pp. 1–8 (2008)
Ermes, M., Pärkkä, J., Mäntyjärvi, J., Korhonen, I.: Detection of daily activities and sports with wearable sensors in controlled and uncontrolled conditions. TITB 12(1), 20–26 (2008)
Ouchi, K., Suzuki, T., Doi, M.: A wearable healthcare support system using user’s context. In: Distributed Computing Systems, pp. 791–792 (2002)
Lichtenauer, J., Hendriks, E., Reinders, M.: Sign language recognition by combining statistical DTW and independent classification. IEEE Trans. Pattern Anal. Mach. Intell. 30(11), 2040–2046 (2008)
Jiang, S., Cao, Y., Iyengar, S., et al.: CareNet: an integrated wireless sensor networking environment for remote healthcare. In: Body Area Networks, pp. 9:1–9:3 (2010)
Vintsyuk, T.K.: Speech discrimination by dynamic programming. Kibernetika 4, 81–88 (1968)
Ko, M.H., West, G., Venkatesh, S., Kumar, M.: Online context recognition in multisensor systems using dynamic time warping. In: ISSNIP, pp. 283–288 (2005)
Sakoe, H., Chiba, S.: Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Signal Process. 26(1), 43–49 (1978)
Pansiot, J., Stoyanov, D., et al.: Ambient and wearable sensor fusion for activity recognition in healthcare monitoring systems. In: 4th International Workshop on Wearable and Implantable Body Sensor Networks, pp. 208–212 (2007)
Stiefmeier, T., Ogris, G., Junker, H., Lukowicz, P., Tröster, G.: Combining motion sensors and ultrasonic hand tracking for continuous activity recognition in a maintenance scenario. In: 10th IEEE International Symposium on Wearable Computers, pp. 97–104 (2006)
You, S., Neumann, U.: Fusion of vision and gyro tracking for robust augmented reality registration. In: Virtual Reality, pp. 71–78 (2001)
Zhu, C., Sheng, W.: Motion- and location-based online human daily activity recognition. Perv. Mob. Comput. 7, 256–269 (2011)
Acknowledgments
This work has been partly supported by RECERCAIXA 2011 Ref. REMEDI and TIN2009-14404-C02.
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Pardo, À., Clapés, A., Escalera, S., Pujol, O. (2014). Actions in Context: System for People with Dementia. In: Nin, J., Villatoro, D. (eds) Citizen in Sensor Networks. CitiSens 2013. Lecture Notes in Computer Science(), vol 8313. Springer, Cham. https://doi.org/10.1007/978-3-319-04178-0_1
DOI: https://doi.org/10.1007/978-3-319-04178-0_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-04177-3
Online ISBN: 978-3-319-04178-0