TwinEDA: a sustainable deep-learning approach for limb-position estimation in preterm infants’ depth images

  • Original Article
  • Published in: Medical & Biological Engineering & Computing

Abstract

Early diagnosis of neurodevelopmental impairments in preterm infants currently relies on the visual analysis of newborns’ motion patterns by trained operators. To help automate this time-consuming and qualitative procedure, we propose a sustainable deep-learning algorithm for accurate limb-pose estimation from depth images. The algorithm consists of a convolutional neural network (TwinEDA) built from architectural blocks that require limited computation while maintaining high predictive performance. To verify its low computational cost and assess its suitability for on-the-edge computing, TwinEDA was additionally deployed on a cost-effective single-board computer. The network was validated on a dataset of 27,000 depth video frames collected from 27 preterm infants during actual clinical practice. Compared to its main state-of-the-art competitor, TwinEDA is twice as fast at predicting a single depth frame and four times lighter in memory, while performing comparably in terms of Dice similarity coefficient (0.88). This result suggests that pursuing efficiency need not come at the expense of performance. This work is among the first to propose an automatic and sustainable limb-position estimation approach for preterm infants, a significant step towards broadly accessible clinical monitoring applications.
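
As a point of reference for the Dice similarity coefficient of 0.88 reported above, the sketch below illustrates how the metric is conventionally computed between a predicted and an annotated binary limb mask. It is a minimal NumPy example under generic assumptions, not the authors’ implementation; the toy masks are invented for demonstration.

    import numpy as np

    def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
        """Dice similarity coefficient between two binary masks.

        The small eps term avoids division by zero when both masks are empty.
        """
        pred = np.asarray(pred_mask, dtype=bool)
        gt = np.asarray(gt_mask, dtype=bool)
        intersection = np.logical_and(pred, gt).sum()
        return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

    # Toy 4x4 masks standing in for a predicted and an annotated limb segment.
    pred = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])
    gt = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
    print(f"Dice = {dice_coefficient(pred, gt):.2f}")  # 2*3 / (4+3) = 0.86

A Dice value of 1 indicates perfect overlap between prediction and annotation, so a value around 0.88 means the predicted and annotated limb regions largely coincide.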

Acknowledgements

The authors would like to acknowledge “L’Oréal Italia per le donne e la scienza”, in collaboration with “Commissione Nazionale Italiana per l’UNESCO”, which partially supported the project.

Funding

This work was supported by the European Union through the grants System Improvement for Neonatal Care (SINC) and SINC 2 under the EU POR FESR funding program.

Author information

Corresponding author

Correspondence to Alessandro Cacciatore.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Migliorelli, L., Cacciatore, A., Ottaviani, V. et al. TwinEDA: a sustainable deep-learning approach for limb-position estimation in preterm infants’ depth images. Med Biol Eng Comput 61, 387–397 (2023). https://doi.org/10.1007/s11517-022-02696-9
