
                    [Last accessed on 3 Jul 2024].
160. Cheng HK, Schwing AG. XMem: long-term video object segmentation with an Atkinson-Shiffrin memory model. In: Avidan S, Brostow G, Cissé M, Farinella GM, Hassner T, editors. Computer Vision - ECCV 2022. Cham: Springer; 2022. pp. 640-58. DOI
161. Duke B, Ahmed A, Wolf C, Aarabi P, Taylor GW. SSTVOS: sparse spatiotemporal transformers for video object segmentation. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2021 Jun 20-25; Nashville, TN, USA. IEEE; 2021. pp. 5908-17. DOI
162. Yang Z, Wei Y, Yang Y. Associating objects with transformers for video object segmentation. Available from: https://proceedings.neurips.cc/paper/2021/hash/147702db07145348245dc5a2f2fe5683-Abstract.html. [Last accessed on 3 Jul 2024].
163. Cheng HK, Oh SW, Price B, Lee JY, Schwing A. Putting the object back into video object segmentation. arXiv. [Preprint.] Apr 11, 2024 [accessed 2024 Jul 3]. Available from: https://arxiv.org/abs/2310.12982.
164. Gong T, Chen K, Wang X, et al. Temporal ROI align for video object recognition. AAAI 2021;35:1442-50. DOI
165. Wu H, Chen Y, Wang N, Zhang ZX. Sequence level semantics aggregation for video object detection. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV); 2019 Oct 27 - Nov 2; Seoul, Korea (South). IEEE; 2019. pp. 9216-24. DOI
166. Zhu X, Wang Y, Dai J, Yuan L, Wei Y. Flow-guided feature aggregation for video object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV); 2017 Oct 22-29; Venice, Italy. IEEE; 2017. pp. 408-17. DOI
167. Zhu X, Xiong Y, Dai J, Yuan L, Wei Y. Deep feature flow for video recognition. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 Jul 21-26; Honolulu, HI, USA. IEEE; 2017. pp. 4141-50. DOI
168. Li B, Wu W, Wang Q, Zhang F, Xing J, Yan J. SiamRPN++: evolution of siamese visual tracking with very deep networks. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019 Jun 15-20; Long Beach, CA, USA. IEEE; 2019. pp. 4277-86. DOI
169. Yan B, Peng H, Fu J, Wang D, Lu H. Learning spatio-temporal transformer for visual tracking. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV); 2021 Oct 10-17; Montreal, QC, Canada. IEEE; 2021. pp. 10428-37. DOI
170. Cui Y, Jiang C, Wang L, Wu G. MixFormer: end-to-end tracking with iterative mixed attention. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2022 Jun 18-24; New Orleans, LA, USA. IEEE; 2022. pp. 13598-608. DOI
171. Bergmann P, Meinhardt T, Leal-Taixe L. Tracking without bells and whistles. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV); 2019 Oct 27 - Nov 2; Seoul, Korea (South). IEEE; 2019. pp. 941-51. DOI
172. Pang J, Qiu L, Li X, et al. Quasi-dense similarity learning for multiple object tracking. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2021 Jun 20-25; Nashville, TN, USA. IEEE; 2021. pp. 164-73. DOI
173. Zhang Y, Sun P, Jiang Y, et al. ByteTrack: multi-object tracking by associating every detection box. In: Avidan S, Brostow G, Cissé M, Farinella GM, Hassner T, editors. Computer Vision - ECCV 2022. Cham: Springer; 2022. pp. 1-21. DOI
174. Luo W, Xing J, Milan A, Zhang X, Liu W, Kim T. Multiple object tracking: a literature review. Artif Intell 2021;293:103448. DOI
175. Wang Y, Sun Q, Liu Z, Gu L. Visual detection and tracking algorithms for minimally invasive surgical instruments: a comprehensive review of the state-of-the-art. Robot Auton Syst 2022;149:103945. DOI
176. Dakua SP, Abinahed J, Zakaria A, et al. Moving object tracking in clinical scenarios: application to cardiac surgery and cerebral aneurysm clipping. Int J Comput Assist Radiol Surg 2019;14:2165-76. DOI PubMed PMC
177. Liu B, Sun M, Liu Q, Kassam A, Li CC, Sclabassi R. Automatic detection of region of interest based on object tracking in neurosurgical video. Conf Proc IEEE Eng Med Biol Soc 2005;2005:6273-6. DOI PubMed
178. Du X, Allan M, Bodenstedt S, et al. Patch-based adaptive weighting with segmentation and scale (PAWSS) for visual tracking in surgical video. Med Image Anal 2019;57:120-35. DOI PubMed PMC
179. Stenmark M, Omerbašić E, Magnusson M, Andersson V, Abrahamsson M, Tran PK. Vision-based tracking of surgical motion during live open-heart surgery. J Surg Res 2022;271:106-16. DOI PubMed
180. Cheng T, Li W, Ng WY, et al. Deep learning assisted robotic magnetic anchored and guided endoscope for real-time instrument tracking. IEEE Robot Autom Lett 2021;6:3979-86. DOI
181. Zhao Z, Voros S, Chen Z, Cheng X. Surgical tool tracking based on two CNNs: from coarse to fine. J Eng 2019;2019:467-72. DOI
182. Robu M, Kadkhodamohammadi A, Luengo I, Stoyanov D. Towards real-time multiple surgical tool tracking. Comput Methods Biomech Biomed Eng Imaging Vis 2021;9:279-85. DOI
183. Lee D, Yu HW, Kwon H, Kong HJ, Lee KE, Kim HC. Evaluation of surgical skills during robotic surgery by deep learning-based multiple surgical instrument tracking in training and actual operations. J Clin Med 2020;9:1964. DOI PubMed PMC
184. García-Peraza-Herrera LC, Li W, Gruijthuijsen C, et al. Real-time segmentation of non-rigid surgical tools based on deep learning and tracking. In: Peters T, et al., editors. Computer-Assisted and Robotic Endoscopy. Cham: Springer; 2017. pp. 84-95. DOI
185. Jo K, Choi Y, Choi J, Chung JW. Robust real-time detection of laparoscopic instruments in robot surgery using convolutional neural networks with motion vector prediction. Appl Sci 2019;9:2865. DOI
186. Zhao Z, Chen Z, Voros S, Cheng X. Real-time tracking of surgical instruments based on spatio-temporal context and deep learning. Comput Assist Surg 2019;24:20-9. DOI PubMed
187. Alshirbaji TA, Jalal NA, Möller K. A convolutional neural network with a two-stage LSTM model for tool presence detection in laparoscopic videos. Curr Dir Biomed Eng 2020;6:20200002. DOI
188. Bouguet JY, Perona P. 3D photography using shadows in dual-space geometry. Int J Comput Vis 1999;35:129-49. DOI
189. Iddan GJ, Yahav G. Three-dimensional imaging in the studio and elsewhere. Proc SPIE 2001;4298:48-55. DOI
190. Nayar SK, Krishnan G, Grossberg MD, Raskar R. Fast separation of direct and global components of a scene using high frequency illumination. ACM Trans Graph 2006;25:935-44. DOI