Page 198 Wei et al. Art Int Surg 2024;4:187-98 | http://dx.doi.org/10.20517/ais.2024.12
In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2022 Oct 23-27; Kyoto, Japan. IEEE; 2022. pp.
8072–7. DOI
12. Colleoni E, Edwards P, Stoyanov D. Synthetic and real inputs for tool segmentation in robotic surgery. In: Medical Image Computing
and Computer Assisted Intervention - MICCAI 2020. Springer, Cham; 2020. pp. 700–10. DOI
13. Long Y, Wu JY, Lu B, et al. Relational graph learning on visual and kinematics embeddings for accurate gesture recognition in robotic
surgery. In: 2021 IEEE International Conference on Robotics and Automation (ICRA); 2021 May 30 - Jun 05; Xi’an, China. IEEE; 2021.
pp. 13346–53. DOI
14. van Amsterdam B, Funke I, Edwards E, et al. Gesture recognition in robotic surgery with multimodal attention. IEEE Trans Med Imaging
2022;41:1677–87. DOI
15. Mildenhall B, Srinivasan PP, Tancik M, Barron JT, Ramamoorthi R, Ng R. NeRF: representing scenes as neural radiance fields for view
synthesis. Commun ACM 2021;65:99–106. DOI
16. Niemeyer M, Geiger A. GIRAFFE: representing scenes as compositional generative neural feature fields. In: 2021 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (CVPR); 2021 Jun 20-25; Nashville, USA. IEEE; 2021. pp. 11448–59. DOI
17. Deng K, Liu A, Zhu JY, Ramanan D. Depth-supervised NeRF: fewer views and faster training for free. In: 2022 IEEE/CVF Conference
on Computer Vision and Pattern Recognition (CVPR); 2022 Jun 18-24; New Orleans, USA. IEEE; 2022. pp. 12872–81. DOI
18. Wei Y, Liu S, Rao Y, Zhao W, Lu J, Zhou J. NerfingMVS: guided optimization of neural radiance fields for indoor multi-view stereo. In:
2021 IEEE/CVF International Conference on Computer Vision (ICCV); 2021 Oct 10-17; Montreal, Canada. IEEE; 2021. pp. 5590–9. DOI
19. Rematas K, Liu A, Srinivasan P, et al. Urban radiance fields. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR); 2022 Jun 18-24; New Orleans, USA. IEEE; 2022. pp. 12922–32. DOI
20. Wang Y, Long Y, Fan SH, Dou Q. Neural rendering for stereo 3D reconstruction of deformable tissues in robotic surgery. In: Medical
Image Computing and Computer-Assisted Intervention - MICCAI 2022. Springer, Cham; 2022. pp. 431–41. DOI
21. Kajiya JT, Von Herzen BP. Ray tracing volume densities. ACM SIGGRAPH Comput Graph 1984;18:165–74. DOI
22. Lee JH, Han MK, Ko DW, Suh IH. From big to small: multi-scale local planar guidance for monocular depth estimation. arXiv. [Preprint]
Sep 23, 2021 [accessed on 2024 Aug 7]. Available from: https://doi.org/10.48550/arXiv.1907.10326.
23. Valentin J, Kowdle A, Barron JT, et al. Depth from motion for smartphone AR. ACM Trans Graph 2018;37:1–19. DOI
24. Curless B, Levoy M. A volumetric method for building complex models from range images. In: Proceedings of the 23rd annual conference
on Computer graphics and interactive techniques. Association for Computing Machinery; 1996. pp. 303–12. DOI
25. Lorensen WE, Cline HE. Marching cubes: a high resolution 3D surface construction algorithm. In: Seminal graphics: pioneering efforts
that shaped the field. Association for Computing Machinery; 1998. pp. 347–53. DOI
26. Allan M, Mcleod J, Wang C, et al. Stereo correspondence and reconstruction of endoscopic data challenge. arXiv. [Preprint] Jan 28, 2021
[accessed on 2024 Aug 7]. Available from: https://doi.org/10.48550/arXiv.2101.01133.
27. Li Z, Dekel T, Cole F, et al. Learning the depths of moving people by watching frozen people. In: 2019 IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR); 2019 Jun 15-20; Long Beach, USA. IEEE; 2019. pp. 4516–25. DOI
28. Eigen D, Puhrsch C, Fergus R. Depth map prediction from a single image using a multi-scale deep network. arXiv. [Preprint] Jun 9, 2014
[accessed on 2024 Aug 7]. Available from: https://doi.org/10.48550/arXiv.1406.2283.
29. Zhou T, Brown M, Snavely N, Lowe DG. Unsupervised learning of depth and ego-motion from video. In: 2017 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR); 2017 Jul 21-26; Honolulu, USA. IEEE; 2017. pp. 6612–9. DOI
30. Ozyoruk KB, Gokceler GI, Bobrow TL, et al. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation
approach for endoscopic videos. Med Image Anal 2021;71:102058. DOI
31. Lu Y, Wei R, Li B, et al. Autonomous intelligent navigation for flexible endoscopy using monocular depth guidance and 3-D shape
planning. In: 2023 IEEE International Conference on Robotics and Automation (ICRA); 2023 May 29 - Jun 02; London, UK. IEEE; 2023.
pp. 1–7. DOI
32. Prendergast JM, Formosa GA, Fulton MJ, Heckman CR, Rentschler ME. A real-time state dependent region estimator for autonomous
endoscope navigation. IEEE Trans Robot 2021;37:918–34. DOI
33. Cheng X, Zhong Y, Harandi M, Drummond T, Wang Z, Ge Z. Deep laparoscopic stereo matching with transformers. In: Medical Image
Computing and Computer-Assisted Intervention - MICCAI 2022. Springer, Cham; 2022. pp. 464–74. DOI
34. Zhou H, Jagadeesan J. Real-time dense reconstruction of tissue surface from stereo optical video. IEEE Trans Med Imaging 2020;39:400–
12. DOI
35. Wang J, Suenaga H, Hoshi K, et al. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay
for dental surgery. IEEE Trans Biomed Eng 2014;61:1295–304. DOI
36. Leonard S, Sinha A, Reiter A, et al. Evaluation and stability analysis of video-based navigation system for functional endoscopic sinus
surgery on in vivo clinical data. IEEE Trans Med Imaging 2018;37:2185–95. DOI
37. Recasens D, Lamarca J, Fácil JM, Montiel JMM, Civera J. Endo-depth-and-motion: reconstruction and tracking in endoscopic videos
using depth networks and photometric constraints. IEEE Robot Autom Lett 2021;6:7225–32. DOI
38. Mahmoud N, Collins T, Hostettler A, Soler L, Doignon C, Montiel JMM. Live tracking and dense reconstruction for handheld monocular
endoscopy. IEEE Trans Med Imaging 2019;38:79–89. DOI

