Abstract
This paper proposes an object detection approach as the first stage in the analysis of Human-Object Interaction (HOI) for automated functional assessment. The proposed system follows a two-step strategy: in the first stage, people and large objects (tables, chairs, etc.) are detected using a pre-trained YOLOv8. Then, a region of interest (ROI) is defined around each person and processed with a custom YOLO model to detect small elements (forks, plates, spoons, etc.). Since no large image dataset includes all the objects of interest, a new dataset has also been compiled, combining images from different sets and improving the available labels. The proposal has been evaluated on the new dataset, as well as on images acquired in the area where the functional assessment is performed, obtaining promising results.
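The two-step strategy can be sketched as follows. This is a minimal illustration, not the authors' implementation: `detect_large` and `detect_small` stand in for the pre-trained YOLOv8 and the custom YOLO model, and the ROI margin value is an assumption. Only the ROI construction around each detected person is implemented concretely, since that is the step fully specified by the abstract.

```python
def person_roi(box, img_w, img_h, margin=0.5):
    """Expand a person bounding box (x1, y1, x2, y2) by a relative
    margin and clip it to the image, giving the ROI that the
    second-stage (small-object) detector would process."""
    x1, y1, x2, y2 = box
    dw = (x2 - x1) * margin
    dh = (y2 - y1) * margin
    return (max(0, int(x1 - dw)), max(0, int(y1 - dh)),
            min(img_w, int(x2 + dw)), min(img_h, int(y2 + dh)))


def two_stage_detect(image, img_w, img_h, detect_large, detect_small):
    """Hypothetical pipeline: stage 1 finds people and large objects
    over the full frame; stage 2 runs a custom detector on a crop
    around each person to find small elements (forks, plates, ...)."""
    detections = detect_large(image)          # people + tables, chairs, ...
    for det in detections:
        if det["label"] == "person":
            roi = person_roi(det["box"], img_w, img_h)
            x1, y1, x2, y2 = roi
            crop = image[y1:y2, x1:x2]        # assumes a NumPy-style image
            det["small_objects"] = detect_small(crop)
    return detections
```

Cropping before the second stage means the small-object detector sees objects at a much larger relative scale than it would in the full frame, which is the usual motivation for this kind of cascade.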
Acknowledgements
This work has been partially supported by the Spanish Ministry of Science and Innovation MICINN/AEI/10.13039/501100011033 under projects EYEFUL-UAH (PID2020-113118RB-C31) and ATHENA (PID2020-115995RB-I00), by CAM under project CONCORDIA (CM/JIN/2021-015), and by UAH under projects ARGOS+ (PIUAH21/IA-016) and METIS (PIUAH22/IA-037).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Melino-Carrero, A., Suárez, Á.N., Losada-Gutierrez, C., Marron-Romera, M., Luna, I.G., Baeza-Mas, J. (2023). Object Detection for Functional Assessment Applications. In: Iliadis, L., Maglogiannis, I., Alonso, S., Jayne, C., Pimenidis, E. (eds) Engineering Applications of Neural Networks. EANN 2023. Communications in Computer and Information Science, vol 1826. Springer, Cham. https://doi.org/10.1007/978-3-031-34204-2_28
Print ISBN: 978-3-031-34203-5
Online ISBN: 978-3-031-34204-2
eBook Packages: Computer Science (R0)