
Page 130                           Ding et al. Art Int Surg 2024;4:109-38  https://dx.doi.org/10.20517/ais.2024.16

                    view synthesis. Commun ACM 2022;65:99-106.  DOI
               49.       Li Z, Müller T, Evans A, et al. Neuralangelo: high-fidelity neural surface reconstruction. In: 2023 IEEE/CVF Conference on
                    Computer Vision and Pattern Recognition (CVPR); 2023 Jun 17-24; Vancouver, BC, Canada. IEEE; 2023. pp. 8456-65.  DOI
               50.       Allan M, Shvets A, Kurmann T, et al. 2017 robotic instrument segmentation challenge. arXiv. [Preprint.] Feb 21, 2019 [accessed
                    2024 Jul 2]. Available from: https://arxiv.org/abs/1902.06426.
               51.       Allan M, Kondo S, Bodenstedt S, et al. 2018 robotic scene segmentation challenge. arXiv. [Preprint.] Aug 3, 2020 [accessed 2024 Jul
                    2]. Available from: https://arxiv.org/abs/2001.11190.
               52.       Psychogyios D, Colleoni E, Van Amsterdam B, et al. SAR-RARP50: segmentation of surgical instrumentation and action recognition on
                     robot-assisted radical prostatectomy challenge. arXiv. [Preprint.] Jan 23, 2024 [accessed 2024 Jul 2]. Available from:
                     https://arxiv.org/abs/2401.00496.
               53.       Grammatikopoulou M, Flouty E, Kadkhodamohammadi A, et al. CaDIS: Cataract dataset for surgical RGB-image segmentation. Med
                    Image Anal 2021;71:102053.  DOI  PubMed
               54.       Cartucho J, Weld A, Tukra S, et al. SurgT challenge: benchmark of soft-tissue trackers for robotic surgery. Med Image Anal
                    2024;91:102985.  DOI  PubMed
               55.       Schmidt A, Mohareri O, DiMaio S, Salcudean SE. Surgical tattoos in infrared: a dataset for quantifying tissue tracking and mapping.
                    arXiv. [Preprint.] Feb 29, 2024 [accessed 2024 Jul 2]. Available from: https://arxiv.org/abs/2309.16782.
               56.       Roß T, Reinke A, Full PM, et al. Comparative validation of multi-instance instrument segmentation in endoscopy: results of the
                     ROBUST-MIS 2019 challenge. Med Image Anal 2021;70:101920.  DOI  PubMed
               57.       SegSTRONG-C: segmenting surgical tools robustly on non-adversarial generated corruptions - an EndoVis'24 challenge. arXiv.
                     [Preprint.] Jul 16, 2024 [accessed 2024 Jul 18]. Available from: https://arxiv.org/abs/2407.11906.
               58.       Qin F, Lin S, Li Y, Bly RA, Moe KS, Hannaford B. Towards better surgical instrument segmentation in endoscopic vision: multi-
                    angle feature aggregation and contour supervision. IEEE Robot Autom Lett 2020;5:6639-46.  DOI
               59.       Hong WY, Kao CL, Kuo YH, Wang JR, Chang WL, Shih CS. CholecSeg8k: a semantic segmentation dataset for laparoscopic
                     cholecystectomy based on Cholec80. arXiv. [Preprint.] Dec 23, 2020 [accessed 2024 Jul 2]. Available from:
                     https://arxiv.org/abs/2012.12453.
               60.       Jha D, Ali S, Emanuelsen K, et al. Kvasir-instrument: diagnostic and therapeutic tool segmentation dataset in gastrointestinal
                    endoscopy. In: Lokoč J, et al., editors. MultiMedia Modeling. Cham: Springer; 2021. pp. 218-29.  DOI
               61.       Wang Z, Lu B, Long Y, et al. AutoLaparo: a new dataset of integrated multi-tasks for image-guided surgical automation in
                    laparoscopic hysterectomy. In: Wang L, Dou Q, Fletcher PT, Speidel S, Li S, editors. Medical Image Computing and Computer
                    Assisted Intervention - MICCAI 2022. Cham: Springer; 2022. pp. 486-96.  DOI
               62.       Lin TY, Maire M, Belongie S, et al. Microsoft COCO: common objects in context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T,
                    editors. Computer Vision - ECCV 2014. Cham: Springer; 2014. pp. 740-55.  DOI
               63.       Shao S, Li Z, Zhang T, et al. Objects365: a large-scale, high-quality dataset for object detection. In: 2019 IEEE/CVF International
                    Conference on Computer Vision (ICCV); 2019 Oct 27 - Nov 2; Seoul, Korea (South). IEEE; 2019. pp. 8429-38.  DOI
               64.       Zia A, Bhattacharyya K, Liu X, et al. Surgical tool classification and localization: results and methods from the MICCAI 2022
                     SurgToolLoc challenge. arXiv. [Preprint.] May 31, 2023 [accessed 2024 Jul 2]. Available from: https://arxiv.org/abs/2305.07152.
               65.       Pfeiffer M, Funke I, Robu MR, et al. Generating large labeled data sets for laparoscopic image processing tasks using unpaired
                    image-to-image translation. In: Shen D, et al., editors. Medical Image Computing and Computer Assisted Intervention - MICCAI
                    2019. Cham: Springer; 2019. pp. 119-27.  DOI
               66.       Kirillov A, Mintun E, Ravi N, et al. Segment anything. arXiv. [Preprint.] Apr 5, 2023 [accessed 2024 Jul 2]. Available from:
                     https://arxiv.org/abs/2304.02643.
               67.       Ozyoruk KB, Gokceler GI, Bobrow TL, et al. EndoSLAM dataset and an unsupervised monocular visual odometry and depth
                    estimation approach for endoscopic videos. Med Image Anal 2021;71:102058.  DOI  PubMed
               68.       Allan M, Mcleod J, Wang C, et al. Stereo correspondence and reconstruction of endoscopic data challenge. arXiv. [Preprint.] Jan 28,
                    2021 [accessed 2024 Jul 2]. Available from: https://arxiv.org/abs/2101.01133.
               69.       Hamlyn Centre laparoscopic/endoscopic video datasets. Available from: https://hamlyn.doc.ic.ac.uk/vision/. [Last accessed on
                     2 Jul 2024].
               70.       Recasens D, Lamarca J, Fácil JM, Montiel JMM, Civera J. Endo-depth-and-motion: reconstruction and tracking in endoscopic videos
                    using depth networks and photometric constraints. IEEE Robot Autom Lett 2021;6:7225-32.  DOI
               71.       Ali S, Pandey AK. ArthroNet: a monocular depth estimation technique with 3D segmented maps for knee arthroscopy. Intell Med
                    2023;3:129-38.  DOI
               72.       Masuda K, Shimizu T, Nakazawa T, Edamoto Y. Registration between 2D and 3D ultrasound images to track liver blood vessel
                    movement. Curr Med Imaging 2023;19:1133-43.  DOI  PubMed
               73.       Bobrow TL, Golhar M, Vijayan R, Akshintala VS, Garcia JR, Durr NJ. Colonoscopy 3D video dataset with paired depth from 2D-3D
                    registration. Med Image Anal 2023;90:102956.  DOI  PubMed  PMC
               74.       Lin B, Sun Y, Sanchez JE, Qian X. Efficient vessel feature detection for endoscopic image analysis. IEEE Trans Biomed Eng
                    2015;62:1141-50.  DOI  PubMed
               75.       JIGSAWS: the JHU-ISI gesture and skill assessment working set. Available from:
                     https://cirl.lcsr.jhu.edu/research/hmm/datasets/jigsaws_release/. [Last accessed on 2 Jul 2024].