Abstract
Recording surgery in the operating room is essential for education and for the evaluation of medical treatment. However, recording the surgical field is difficult because the target is heavily occluded during surgery by the heads and hands of doctors and nurses. We use a recording system in which multiple cameras are embedded in the surgical lamp, assuming that at least one camera records the target without occlusion at any given time. In this paper, we propose Conditional-BARF (C-BARF), which generates occlusion-free images by synthesizing novel views from these cameras, aiming to produce videos with smooth camera-pose transitions. To the best of our knowledge, this is the first work to tackle novel view synthesis from multiple images of a surgical scene. We conduct experiments on an original dataset covering three different types of surgery. Our experiments show that we can successfully synthesize novel views from the images recorded by the multiple cameras embedded in the surgical lamp.
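The paper does not include code, but C-BARF belongs to the NeRF/BARF family of methods, whose shared core is volume rendering of per-sample densities and colors along a camera ray. As a rough, illustrative sketch only (function and variable names are our own, not from the paper), that compositing step can be written as:

```python
import numpy as np

def volume_render(sigmas, rgbs, deltas):
    """Composite N samples along one ray into a single color.

    sigmas: (N,) volume densities predicted by the radiance field
    rgbs:   (N, 3) per-sample colors
    deltas: (N,) distances between adjacent samples along the ray
    """
    # Opacity of each sample segment.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])  # T_1 = 1 by convention
    # Final color is an alpha-composited weighted sum of sample colors.
    weights = alphas * trans
    color = (weights[:, None] * rgbs).sum(axis=0)
    return color, weights
```

In BARF-style training, the camera poses that generate these rays are optimized jointly with the radiance field; the compositing above stays the same regardless of how the poses are obtained.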
Acknowledgement
We would like to express our gratitude to Yusuke Sekikawa of Denso IT Laboratory, Japan. Without his kind advice, this work would not have been completed. We would also like to thank the reviewers for their valuable comments. This work was supported by the MHLW Health, Labour, and Welfare Sciences Research Grants (Research on Medical ICT and Artificial Intelligence Program, Grant Number 20AC1004), MIC/SCOPE 201603003, and JSPS KAKENHI Grant Number 22H03617.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Masuda, M., Saito, H., Takatsume, Y., Kajita, H. (2022). Novel View Synthesis for Surgical Recording. In: Mukhopadhyay, A., Oksuz, I., Engelhardt, S., Zhu, D., Yuan, Y. (eds) Deep Generative Models. DGM4MICCAI 2022. Lecture Notes in Computer Science, vol 13609. Springer, Cham. https://doi.org/10.1007/978-3-031-18576-2_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-18575-5
Online ISBN: 978-3-031-18576-2
eBook Packages: Computer Science, Computer Science (R0)