Comparison of the GOALS and MISTELS scores for the evaluation of surgeons on training benches

  • Original Article
International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Evaluation of surgical technical abilities is a major issue in minimally invasive surgery. Devices such as training benches provide specific scores to evaluate surgeons, but these scores do not transfer to the operating room (OR). Conversely, several scores measure performance in the OR but have not been evaluated on training benches. Our aim was to demonstrate that the GOALS score, which effectively grades laparoscopic abilities in the OR, can also be used for evaluation on a laparoscopic training bench (MISTELS). This could lead to training systems that identify more precisely which skills have been acquired and which still need to be worked on.

Methods

Thirty-two volunteers (surgeons, residents and medical students) performed the 5 tasks of the MISTELS training bench and were simultaneously video-recorded. Their performance was evaluated with the MISTELS score and with the GOALS score, the latter based on a review of the recordings by two experienced, blinded laparoscopic surgeons. The concurrent validity of the GOALS score was assessed using Pearson and Spearman correlation coefficients with the MISTELS score. The construct validity of the GOALS score was assessed with k-means clustering and accuracy rates. Lastly, the abilities explored by each MISTELS task were identified with multiple linear regression.
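
The following is a minimal sketch, not the authors' code, of how the statistics named above could be computed in Python with SciPy and scikit-learn. All data arrays (mistels, goals, level, goals_items, task_score) are hypothetical placeholders, and mapping each k-means cluster to its majority experience-level group before computing an accuracy rate is one plausible reading of the construct-validity analysis, not a detail given in the paper.

  import numpy as np
  from scipy.stats import pearsonr, spearmanr
  from sklearn.cluster import KMeans
  from sklearn.linear_model import LinearRegression

  rng = np.random.default_rng(0)
  mistels = rng.normal(60, 15, 32)              # placeholder MISTELS overall scores
  goals = 0.8 * mistels + rng.normal(0, 8, 32)  # placeholder GOALS overall scores
  level = np.repeat([0, 1, 2], [12, 10, 10])    # placeholder groups: student/resident/surgeon

  # Concurrent validity: correlation between the two overall scores
  r_pearson, _ = pearsonr(mistels, goals)
  r_spearman, _ = spearmanr(mistels, goals)

  # Construct validity: cluster one score into 3 groups and compare with experience level.
  # Cluster labels are arbitrary, so each cluster is mapped to its majority experience
  # group before the accuracy rate is computed (an assumption, not stated in the paper).
  def cluster_accuracy(score, level, k=3):
      labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(score.reshape(-1, 1))
      mapped = np.empty_like(labels)
      for c in range(k):
          mask = labels == c
          mapped[mask] = np.bincount(level[mask]).argmax()
      return float((mapped == level).mean())

  acc_goals = cluster_accuracy(goals, level)
  acc_mistels = cluster_accuracy(mistels, level)

  # Abilities per task: regress a MISTELS task score on the five GOALS items
  goals_items = rng.normal(3, 1, (32, 5))       # placeholder per-item GOALS ratings
  task_score = rng.normal(50, 10, 32)           # placeholder MISTELS task score
  coefs = LinearRegression().fit(goals_items, task_score).coef_

  print(f"Pearson r = {r_pearson:.2f}, Spearman rho = {r_spearman:.2f}")
  print(f"Cluster accuracy: GOALS = {acc_goals:.2f}, MISTELS = {acc_mistels:.2f}")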

Results

The GOALS and MISTELS scores are strongly correlated (Pearson correlation coefficient = 0.85 and Spearman correlation coefficient = 0.82 for the overall score). The GOALS score also shows construct validity for the tasks of the training bench, with a better accuracy rate between experience-level groups after k-means clustering than the original MISTELS score (accuracy rates of 0.75 and 0.56, respectively).

Conclusion

The GOALS score is well suited to evaluating the performance of surgeons of different levels during the completion of the tasks of the MISTELS training bench.

Acknowledgements

This work was partially supported by the French ANR within the Investissements d’Avenir program Labex Computer Assisted Medical Interventions (CAMI) under reference ANR-11-LABX-0004.

Author information

Corresponding author

Correspondence to Sandrine Voros.

Ethics declarations

Conflict of interest

Authors Long, Moreau-Gaudry, Cinquin and Voros are co-authors of a patent indirectly related to the research entitled “System and method for analysing a surgical operation by endoscopy” under reference EP2197384A2 (US20110046476 A1). The other authors have no conflicts of interest or financial ties to disclose.

Human and animal rights

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Cite this article

Wolf, R., Medici, M., Fiard, G. et al. Comparison of the GOALS and MISTELS scores for the evaluation of surgeons on training benches. Int J CARS 13, 95–103 (2018). https://doi.org/10.1007/s11548-017-1645-y
