
Stereovision based force estimation with stiffness mapping in surgical tool insertion using recurrent neural network

Published in: The Journal of Supercomputing

Abstract

This paper proposes the Stereovision Based Force Estimation Method (SBFEM), a novel deep-learning approach for predicting the tool–tissue interaction force during surgical procedures. The interaction force is estimated from computer vision and neural networks rather than from direct force sensors, which are difficult to integrate into surgical tools because of biocompatibility, sterilizability, and integration constraints. The proposed model processes both spatial and temporal information acquired from the vision and tool data. An LSTM-RNN framework with dimensionality reduction is trained on in vivo experimental data from porcine skin, and a cyclical learning rate method is suggested for fine-tuning the network. The analysis is based on three distinct datasets, each with three cases, to validate the results. The proposed method, RNN-LSTM + DR + CLR, outperforms the RNN and the RNN-LSTM without dimensionality reduction by 8.46% and 3.98% in force prediction accuracy, respectively. Notably, this work reports an average RMSE of 0.01 N in the force component and 0.03 N·m in the torque component along the applied force direction. The results show that the estimated force quality improves when dimensionality reduction is applied to the extracted features and when tool and vision data are processed together. The network performed best when optimized with a root-mean-square-error loss and the cyclical learning rate method, achieving low computational cost on small datasets. Finally, the Mann–Whitney U test shows that the predicted force components are adaptable to any dataset.
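The cyclical learning rate schedule used for fine-tuning, together with the RMSE criterion used as the loss, can be sketched as follows. This is a minimal illustration of Smith's triangular CLR policy in NumPy, not the paper's implementation; the `base_lr`, `max_lr`, and `step_size` values are hypothetical placeholders:

```python
import numpy as np

def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate (Smith, 2017).

    The rate climbs linearly from base_lr to max_lr over step_size
    iterations, then descends back, repeating every 2 * step_size.
    """
    cycle = np.floor(1 + iteration / (2 * step_size))
    x = np.abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

def rmse(y_true, y_pred):
    """Root-mean-square error between predicted and measured forces."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# At iteration 0 the schedule sits at base_lr; at iteration step_size
# it peaks at max_lr; at 2 * step_size it is back at base_lr.
```

In a full training pipeline, a schedule like this would be queried once per iteration and the result assigned to the optimizer's learning rate (e.g. through a Keras callback), cycling the rate between the two bounds instead of decaying it monotonically.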



Acknowledgements

The authors gratefully acknowledge Sri Ramachandra Medical College and Research Institute and the MIT Campus, Anna University, Chennai, for providing the opportunity and the facilities needed to carry out this work. The authors also thank the University Grants Commission (UGC) for funding the work, and gratefully acknowledge all the people who contributed to it.

Author information

Corresponding author

Correspondence to P. V. Sabique.

Ethics declarations

Conflict of interest

This study was funded by the University Grants Commission (F./2017–18/NFO-2017–18-OBC-KER-60500), Government of India.

Ethical approval

All applicable international, national, and Anna University guidelines for the care and use of animals were followed, and all experiments were conducted in accordance with the guidelines of the Animal Welfare Board of India.

Informed consent

Informed consent was obtained from all individual participants included in the study. For this type of study, formal consent was not required.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 37 KB)



Cite this article

Sabique, P.V., Ganesh, P. & Sivaramakrishnan, R. Stereovision based force estimation with stiffness mapping in surgical tool insertion using recurrent neural network. J Supercomput 78, 14648–14679 (2022). https://doi.org/10.1007/s11227-022-04432-4
