
HybNet: a hybrid network structure for pain intensity estimation

Original article, published in The Visual Computer.

Abstract

Automatic pain intensity estimation has great potential in rehabilitation medicine, since a patient's health status can be assessed by analyzing facial images. Deep convolutional neural networks (CNNs) have recently made great progress in many fields, including natural language processing, image classification and action recognition. Motivated by these achievements, a novel end-to-end hybrid network composed of 3D, 2D and 1D convolutions is proposed to extract multidimensional features from image sequences. Specifically, the 3D convolutional neural network (3D CNN) captures spatiotemporal features, the 2D convolutional neural network (2D CNN) captures spatial features, and the 1D convolutional neural network (1D CNN) captures geometric information from facial landmarks. Finally, the features extracted by the three networks are fused for regression. The proposed HybNet is evaluated on the UNBC-McMaster Shoulder Pain Expression Archive Database, and the experimental results show that it effectively extracts discriminative high-level features and achieves performance competitive with state-of-the-art methods.
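As a rough illustration of the three-branch design described above (not the authors' code; all layer sizes, channel counts and the 66-landmark input are illustrative assumptions), the fusion of 3D, 2D and 1D convolutional features for regression might be sketched in PyTorch as follows:

```python
import torch
import torch.nn as nn

class HybNetSketch(nn.Module):
    """Illustrative three-branch hybrid network: a 3D conv branch over the
    frame sequence, a 2D conv branch over a single frame, and a 1D conv
    branch over facial-landmark coordinates; the branch features are
    concatenated and fed to a regressor. Sizes are hypothetical."""

    def __init__(self, n_landmarks=66):
        super().__init__()
        # 3D branch: spatiotemporal features from (C, T, H, W) clips
        self.branch3d = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        # 2D branch: spatial features from one frame
        self.branch2d = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # 1D branch: geometric features from (x, y) landmark coordinates
        self.branch1d = nn.Sequential(
            nn.Conv1d(2, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        # Fused 8+8+8 features regressed to a single pain-intensity score
        self.regressor = nn.Linear(8 * 3, 1)

    def forward(self, clip, frame, landmarks):
        fused = torch.cat([self.branch3d(clip),
                           self.branch2d(frame),
                           self.branch1d(landmarks)], dim=1)
        return self.regressor(fused)

model = HybNetSketch()
clip = torch.randn(4, 3, 16, 64, 64)   # batch of 16-frame RGB clips
frame = torch.randn(4, 3, 64, 64)      # one RGB frame per sample
landmarks = torch.randn(4, 2, 66)      # 66 (x, y) facial landmarks
out = model(clip, frame, landmarks)
print(out.shape)  # torch.Size([4, 1]): one pain score per sample
```

The key design point the abstract describes is late fusion: each branch reduces its own input modality to a fixed-length vector, so the concatenation step is independent of clip length or image resolution.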



Acknowledgements

This work was supported by the National Natural Science Foundation of China under the grant 61871278.

Author information

Correspondence to Linbo Qing.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Huang, Y., Qing, L., Xu, S. et al. HybNet: a hybrid network structure for pain intensity estimation. Vis Comput 38, 871–882 (2022). https://doi.org/10.1007/s00371-021-02056-y
