Motion analysis of the JHU–ISI Gesture and Skill Assessment Working Set II: learning curve analysis

  • Original Article
  • Published in: International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

The Johns Hopkins–Intuitive Gesture and Skill Assessment Working Set (JIGSAWS) dataset is used to develop robotic surgery skill assessment tools, but no detailed learning curve analysis of this dataset has been performed. The aim of this study is to perform a learning curve analysis of the existing JIGSAWS dataset.

Methods

In JIGSAWS, eight participants (four novices, two intermediates and two experts) each performed five trials of three exercises (suturing, knot-tying and needle passing). Global Rating Scale scores and the kinematic parameters time, path length and number of movements were analyzed quantitatively, and qualitatively by graphical analysis.
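
The kinematic parameters named above can be computed directly from the tooltip trajectories recorded in the dataset. The sketch below is a minimal illustration only: the whitespace-separated file layout, the 30 Hz sampling rate, the tooltip position columns, the speed-threshold definition of a discrete movement and the helper name trial_metrics are assumptions for this example, not details taken from this study or from the ROVIMAS software.

    # Minimal sketch: time, path length and number of movements for one trial.
    # Assumptions (not from the paper): whitespace-separated kinematics file,
    # 30 Hz sampling, x/y/z tooltip position in the columns given below, and a
    # simple speed-threshold definition of a discrete "movement".
    import numpy as np

    SAMPLE_HZ = 30            # assumed sampling rate
    POS_COLS = (38, 39, 40)   # assumed x, y, z columns of one tooltip

    def trial_metrics(path, pos_cols=POS_COLS, hz=SAMPLE_HZ, speed_thresh=0.01):
        """Return (time in s, path length, number of movements) for one trial file."""
        data = np.loadtxt(path)               # frames x columns
        pos = data[:, list(pos_cols)]         # tooltip trajectory
        steps = np.diff(pos, axis=0)          # per-frame displacement vectors
        dist = np.linalg.norm(steps, axis=1)  # per-frame distances
        time_s = len(data) / hz
        path_length = dist.sum()
        # Count a movement each time the speed rises above the threshold.
        speed = dist * hz
        moving = speed > speed_thresh
        n_movements = int(np.sum(~moving[:-1] & moving[1:]))
        return time_s, path_length, n_movements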

Results

There are no significant differences in Global Rating Scale scores across the five trials. Time in the suturing exercise and path length in the needle-passing exercise differed significantly across trials; other kinematic parameters did not. Qualitative analysis shows a learning curve only for suturing. Cumulative sum analysis suggests completion of the learning curve for suturing by trial 4.

Conclusions

The existing JIGSAWS dataset does not show a quantitative learning curve for Global Rating Scale scores or for most kinematic parameters, which may be due in part to the limited size of the dataset. Qualitative analysis shows a learning curve for suturing. Cumulative sum analysis suggests completion of the suturing learning curve by trial 4. An expanded dataset is needed to facilitate subset analyses.
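
Cumulative sum (CUSUM) analysis tracks the running sum of each trial's deviation from a reference value; the learning curve is judged complete where the curve flattens or turns downward. The sketch below is a minimal illustration with hypothetical trial times and a mean-based reference, not data or the exact CUSUM formulation used in this study.

    # Minimal CUSUM sketch for a learning curve. The trial times are
    # hypothetical and the mean is used as the reference value; neither is
    # taken from the JIGSAWS analysis reported here.
    import numpy as np

    trial_times = np.array([310.0, 295.0, 270.0, 240.0, 238.0])  # seconds, hypothetical
    reference = trial_times.mean()

    # CUSUM_i = sum over trials k <= i of (time_k - reference).
    cusum = np.cumsum(trial_times - reference)
    for trial, value in enumerate(cusum, start=1):
        print(f"trial {trial}: CUSUM = {value:+.1f} s")

A downward turn in the plotted CUSUM indicates performance better than the reference value; variants such as LC-CUSUM [35] replace the observed mean with predefined acceptable and unacceptable performance levels.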

References

  1. Martin JA, Regehr G, Reznick R, MacRae H, Murnaghan J, Hutchison C, Brown M (1997) Objective structured assessment of technical skill (OSATS) for surgical residents. Br J Surg 84:273–278

  2. Goh AC, Goldfarb DW, Sander JC, Miles BJ, Dunkin B (2012) Global evaluative assessment of robotic skills: validation of a clinical assessment tool to measure robotic surgical skills. J Urol 187:247–252

  3. van Hove PD, Tuijthof GJ, Verdaasdonk EG, Stassen LP, Dankelman J (2010) Objective assessment of technical surgical skills. Br J Surg 97:972–987

  4. Mazzon G, Sridhar A, Busuttil G, Thompson J, Nathan S, Briggs T, Kelly J, Shaw G (2017) Learning curves for robotic surgery: a review of the recent literature. Curr Urol Rep 18(11):89

  5. Uemura M, Tomikawa M, Kumashiro R, Miao T, Souzaki R, Ieiri S, Ohuchida K, Lefor AT, Hashizume M (2014) Analysis of hand motion differentiates expert and novice surgeons. J Surg Res 188:8–13

  6. Uemura M, Tomikawa M, Miao T, Souzaki R, Ieiri S, Akahoshi T, Lefor AK, Hashizume M (2018) Feasibility of an AI-based measure of the hand motions of expert and novice surgeons. Comput Math Methods Med 2018:9873273

  7. Dosis A, Bello F, Rockall T, Munz K, Moorthy S, Martin S, Darzi A (2003) ROVIMAS: a software package for assessing surgical skills using the da Vinci telemanipulator system. In: Fourth International Conference on Information Technology Applications in Biomedicine (ITAB 2003), Birmingham, England

  8. Dosis A, Aggarwal R, Bello F, Moorthy K, Munz Y, Gillies D, Darzi A (2005) Synchronized video and motion analysis for the assessment of procedures in the operating theater. Arch Surg 140:293–299

  9. Mason JD, Ansell J, Warren N, Torkington J (2013) Is motion analysis a valid tool for assessing laparoscopic skill? Surg Endosc 27:1468–1477

  10. Kowalewski KF, Garrow CR, Schmidt MW, Benner L, Müller-Stich BP, Nickel F (2019) Sensor-based machine learning for workflow detection and as key to detect expert level in laparoscopic suturing and knot-tying. Surg Endosc 33(11):3732–3740

  11. Kowalewski KF, Hendrie JD, Schmidt MW, Garrow CR, Bruckner T, Proctor T, Paul S, Adigüzel D, Bodenstedt S, Erben A, Kenngott H, Erben Y, Speidel S, Müller-Stich BP, Nickel F (2017) Development and validation of a sensor- and expert model-based training system for laparoscopic surgery: the iSurgeon. Surg Endosc 31(5):2155–2165

  12. Dosis A (2005) Modeling and assessment of surgical dexterity in laparoscopic and robotically assisted surgery using synchronized video-motion analysis and hidden Markov models. Dissertation, Imperial College London, University of London

  13. Aggarwal R, Grantcharov T, Moorthy K, Milland T, Papasavas P, Dosis A, Bello F, Darzi A (2007) An evaluation of the feasibility, validity, and reliability of laparoscopic skills assessment in the operating room. Ann Surg 245:992–999

  14. DiMaio S, Hasser C (2008) The da Vinci research interface. In: 2008 MICCAI Workshop on Systems and Architectures for Computer Assisted Interventions, Midas Journal. http://hdl.handle.net/10380/1464

  15. Hung AJ, Chen J, Jarc A, Hatcher D, Djaladat H, Gill IS (2018) Development and validation of objective performance metrics for robot-assisted radical prostatectomy: a pilot study. J Urol 199:296–304

  16. Fard MJ, Ameri S, Darin Ellis R, Chinnam RB, Pandya AK, Klein MD (2018) Automated robot-assisted surgical skill evaluation: predictive analytics approach. Int J Med Robot 14(1):e1850 (Epub 2017 Jun 29)

  17. Judkins TN, Oleynikov D, Stergiou N (2009) Objective evaluation of expert and novice performance during robotic surgical training tasks. Surg Endosc 23:590–597

  18. Gao Y, Vedula SS, Reiley CE, Ahmidi N, Varadarajan B, Liu H, Tao L, Zappella L, Bejar B, Yuh D, Chen CCG, Vidal R, Khudanpur S, Hager G (2014) JHU-ISI gesture and skill assessment working set (JIGSAWS): a surgical activity dataset for human motion modeling. In: MICCAI Workshop M2CAI 2014, vol 3

  19. Forestier G, Petitjean F, Senin P, Despinoy F, Huaulmé A, Fawaz HI, Weber J, Idoumghar L, Muller PA, Jannin P (2018) Surgical motion analysis using discriminative interpretable patterns. Artif Intell Med 91:3–11

  20. Ahmidi N, Tao L, Sefati S, Gao Y, Lea C, Haro BB, Zappella L, Khudanpur S, Vidal R, Hager GD (2017) A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery. IEEE Trans Biomed Eng 64:2025–2041

  21. Funke I, Mees ST, Weitz J, Speidel S (2019) Video-based surgical skill assessment using 3D convolutional neural networks. Int J Comput Assist Radiol Surg 14:1217–1225

  22. Zia A, Essa I (2018) Automated surgical skill assessment in RMIS training. Int J Comput Assist Radiol Surg 13:731–739

  23. Wang Z, Majewicz FA (2018) Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery. Int J Comput Assist Radiol Surg 13:1959–1970

  24. Ismail Fawaz H, Forestier G, Weber J, Idoumghar L, Muller PA (2019) Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural networks. Int J Comput Assist Radiol Surg 14:1611–1617

  25. Nguyen XA, Ljuhar D, Pacilli M, Nataraja RM, Chauhan S (2019) Surgical skill levels: classification and analysis using deep neural network model and motion signals. Comput Methods Programs Biomed 177:1–8

  26. Anh NX, Nataraja RM, Chauhan S (2020) Towards near real-time assessment of surgical skills: a comparison of feature extraction techniques. Comput Methods Programs Biomed 187:105234. https://doi.org/10.1016/j.cmpb.2019.105234 (PMID: 31794913)

  27. Sorensen MD, Delostrinos C, Johnson MH, Grady RW, Lendvay TS (2011) Comparison of the learning curve and outcomes of robotic assisted pediatric pyeloplasty. J Urol 185(6 Suppl):2517–2522

  28. Woelk JL, Casiano ER, Weaver AL, Gostout BS, Trabuco EC, Gebhart JB (2013) The learning curve of robotic hysterectomy. Obstet Gynecol 121:87–95

  29. Vilallonga R, Fort JM, Gonzalez O, Caubet E, Boleko A, Neff KJ, Armengol M (2012) The initial learning curve for robot-assisted sleeve gastrectomy: a surgeon’s experience while introducing the robotic technology in a bariatric surgery department. Minim Invasive Surg 2012:347131

  30. Hernandez JD, Bann SD, Munz Y, Moorthy K, Datta V, Martin S, Dosis A, Bello F, Darzi A, Rockall T (2004) Qualitative and quantitative analysis of the learning curve of a simulated surgical task on the da Vinci system. Surg Endosc 18:372–378

  31. Leijte E, de Blaauw I, Van Workum F, Rosman C, Botden S (2020) Robot assisted versus laparoscopic suturing learning curve in a simulated setting. Surg Endosc 34(8):3679–3689

  32. Duran C, Estrada S, O’Malley M, Lumsden AB, Bismuth J (2015) Kinematics effectively delineate accomplished users of endovascular robotics with a physical training model. J Vasc Surg 61:535–541

  33. Bustos R, Mangano A, Gheza F, Chen L, Aguiluz-Cornejo G, Gangemi A, Sanchez-Johnsen L, Hassan C, Masrur M (2019) Robotic-assisted Roux-en-Y Gastric bypass: learning curve assessment using cumulative sum and literature review. Bariatr Surg Pract Patient Care 14:95–101

  34. Waller HM, Connor SJ (2009) Cumulative sum (Cusum) analysis provides an objective measure of competency during training in endoscopic retrograde cholangio-pancreatography (ERCP). HPB (Oxford) 11:565–569

  35. Weil G, Motamed C, Biau DJ, Guye ML (2017) Learning curves for three specific procedures by anesthesiology residents using the learning curve cumulative sum (LC-CUSUM) test. Korean J Anesthesiol 70:196–202

  36. Wani S, Coté GA, Keswani R, Mullady D, Azar R, Murad F, Edmundowicz S, Komanduri S, McHenry L, Al-Haddad MA, Hall M, Hovis CE, Hollander TG, Early D (2013) Learning curves for EUS by using cumulative sum analysis: implications for American society for gastrointestinal endoscopy recommendations for training. Gastrointest Endosc 77:558–565

  37. Lefor AL, Harada K, Dosis A, Mitsuishi M (2020) Motion analysis of the JHU-ISI gesture and skill assessment working set using robotics video and motion assessment software. Int J Comput Assist Radiol Surg 15(12):2017–2025

  38. Datta V, Chang A, Mackay S, Darzi A (2002) The relationship between motion analysis and surgical technical assessments. Am J Surg 184:70–73

  39. JIGSAWS: The JHU-ISI Gesture and Skill Assessment Working Set. https://cirl.lcsr.jhu.edu/research/hmm/datasets/jigsaws_release. Accessed 10 July 2019

  40. Nisky I, Okamura AM, Hsieh MH (2014) Effects of robotic manipulators on movements of novices and surgeons. Surg Endosc 28:2145–2158

  41. Wilcoxon signed-ranks test calculator. https://www.socscistatistics.com/tests/signedranks/default.aspx. Accessed 27 December 2020

  42. Hung AJ, Chen J, Gill IS (2018) Automated performance metrics and machine learning algorithms to measure surgeon performance and anticipate clinical outcomes in robotic surgery. JAMA Surg 153(8):770–771. https://doi.org/10.1001/jamasurg.2018.1512 (PMID: 29926095)

  43. Hung AJ, Oh PJ, Chen J, Ghodoussipour S, Lane C, Jarc A, Gill IS (2019) Experts vs super-experts: differences in automated performance metrics and clinical outcomes for robot-assisted radical prostatectomy. BJU Int 123(5):861–868. https://doi.org/10.1111/bju.14599 (PMID: 30358042)

Acknowledgement

The contributions of Murilo Marinho PhD are gratefully acknowledged.

Funding

This work was supported by JSPS KAKENHI Grant Number 19H05585.

Author information

Corresponding author

Correspondence to Alan Kawarai Lefor.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This study did not involve any human or animal subjects and is a review of published data; informed consent was therefore not required.

Data availability

All data are available online [39].

Code availability

The software used to convert the JIGSAWS data to the format used by ROVIMAS is available on request from the author.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Lefor, A.K., Harada, K., Dosis, A. et al. Motion analysis of the JHU–ISI Gesture and Skill Assessment Working Set II: learning curve analysis. Int J CARS 16, 589–595 (2021). https://doi.org/10.1007/s11548-021-02339-8
