Deep Learning Architectures for Pain Recognition Based on Physiological Signals

  • Conference paper
  • In: Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges (ICPR 2022)

Abstract

The overall classification performance and generalization ability of a traditional information fusion architecture, built upon so-called handcrafted features, are limited by its reliance on expert knowledge specific to the underlying application domain. Integrating feature engineering and the optimization of the fusion parameters into a single optimization process using deep neural networks has demonstrated, in several application domains (e.g., computer vision), its potential to significantly improve not only the inference performance of a classification system but also its ability to generalize and adapt to unseen but related domains. This is achieved by enabling the designed system to autonomously detect, extract, and combine relevant information directly from the raw signals, according to the classification task at hand. The following work focuses specifically on pain recognition based on bio-physiological modalities and summarizes recently proposed deep fusion approaches for aggregating information stemming from a diverse set of physiological signals in order to accurately classify several levels of artificially induced pain intensity.
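The core idea described above, learning per-channel representations directly from raw physiological signals and fusing them into a joint feature vector for classification, can be sketched in miniature. The snippet below is an illustrative toy only, not the authors' architecture: the channel names, kernel sizes, and the untrained random weights are all placeholder assumptions, and a real system would learn these parameters end-to-end with a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_features(signal, kernel, pool=4):
    """Toy 1-D convolutional feature extractor for one physiological
    channel: convolve with a filter, rectify (ReLU), average-pool."""
    conv = np.convolve(signal, kernel, mode="valid")
    relu = np.maximum(conv, 0.0)
    n = len(relu) // pool * pool
    return relu[:n].reshape(-1, pool).mean(axis=1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Three synthetic raw channels standing in for physiological recordings
# (e.g. EDA, ECG, EMG); purely simulated data for illustration.
channels = [rng.standard_normal(256) for _ in range(3)]
kernels = [rng.standard_normal(8) for _ in range(3)]

# Feature-level ("early") fusion: concatenate the per-channel
# representations into one joint feature vector.
fused = np.concatenate([channel_features(s, k)
                        for s, k in zip(channels, kernels)])

# Linear classification head over the fused representation, with
# untrained random weights and 4 hypothetical pain-intensity classes.
n_classes = 4
W = rng.standard_normal((n_classes, fused.size)) * 0.01
probs = softmax(W @ fused)

print(probs.shape)  # (4,)
```

In an actual deep fusion system the convolution kernels and the classification head would be optimized jointly by backpropagation, which is precisely what lets the model discover task-relevant features without handcrafted descriptors.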


Acknowledgments

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.

Author information

Correspondence to Patrick Thiam.


Copyright information

© 2023 Springer Nature Switzerland AG

About this paper

Cite this paper

Thiam, P., Kestler, H.A., Schwenker, F. (2023). Deep Learning Architectures for Pain Recognition Based on Physiological Signals. In: Rousseau, JJ., Kapralos, B. (eds) Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges. ICPR 2022. Lecture Notes in Computer Science, vol 13643. Springer, Cham. https://doi.org/10.1007/978-3-031-37660-3_24

  • DOI: https://doi.org/10.1007/978-3-031-37660-3_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-37659-7

  • Online ISBN: 978-3-031-37660-3

  • eBook Packages: Computer Science, Computer Science (R0)
