
Page 161                                                           Choksi et al. Art Int Surg. 2025;5:160-9  https://dx.doi.org/10.20517/ais.2024.84

               Results: A total of 102 videos from 27 surgeons were evaluated using 3-fold cross-validation, 51 videos for the
               backhand suturing task and 51 videos for the railroad suturing task. Performance was assessed on sub-stitch
               classification accuracy, technical score accuracy, and surgeon proficiency prediction. The clip-based Video Swin
               Transformer models achieved an average classification accuracy of 70.23% for sub-stitch classification and 68.4%
               for technical score prediction on the test folds. Combining the model outputs, the Random Forest Classifier
               achieved an average accuracy of 66.7% in predicting surgeon proficiency.
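The fusion step described above, feeding per-video outputs of the clip-based models into a Random Forest Classifier evaluated with 3-fold cross-validation, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the feature construction (synthetic stand-ins for aggregated sub-stitch and technical-score predictions), the feature dimensions, and all variable names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-video features, e.g. averaged sub-stitch
# class probabilities and technical-score predictions produced by the
# two Video Swin Transformer models (dimensions are illustrative).
n_videos, n_features = 102, 12
X = rng.normal(size=(n_videos, n_features))
# Synthetic binary proficiency labels (hypothetical: 0 = novice, 1 = expert).
y = rng.integers(0, 2, size=n_videos)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# 3-fold cross-validation, mirroring the paper's evaluation protocol.
scores = cross_val_score(clf, X, y, cv=3, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f}")
```

With real features in place of the random arrays, the mean of `scores` corresponds to the averaged test-fold accuracy reported above.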

               Conclusion: This study shows the feasibility of creating a DL-based automatic assessment tool for robotic-assisted
               surgery. Using machine learning models, we predicted the proficiency level of a surgeon with 66.7% accuracy. Our
               dry lab model proposes a standardized training and assessment tool for suturing tasks using computer vision.
               Keywords: Automatic surgical skill assessment, computer vision, surgical education, simulation




               INTRODUCTION
               Deep learning (DL) models, particularly those in computer vision (CV), have rapidly advanced over the last
               five years. CV, a form of artificial intelligence (AI), enables machines to recognize and interpret images
               using DL algorithms. In recent years, CV models have been increasingly applied in the healthcare field.
               Given the large amount of video-based data, minimally invasive surgery remains an apt field for applying
               these CV models. In the last few years, CV in minimally invasive surgery has already been able to achieve
               task segmentation, object detection, and gesture recognition[1-4].

               Technical skill assessment remains integral to surgical training. Surgical performance in the operating room
               is key to good patient outcomes[5]. Current technical skill evaluation, such as video-based assessment,
               remains time-consuming and provides subjective, sometimes unactionable feedback. Consequently,
               constructing a framework for automated skills assessment is of the utmost importance. In open surgery,
               other methods, such as tracking hand movement, have also been utilized for automatic assessment.
               Grewal et al. utilized an inertial measurement unit to collect data on hand movements to automatically
               assess surgical skills, while Azari et al. assessed surgical skills using CV on hand movements[6,7].
               Robot-assisted surgery is becoming widespread. However, currently, there is no validated, universal robot
               training curriculum or assessment for surgical trainees. Fundamentals of laparoscopic surgery (FLS) has
               been created for the training and assessment of surgical residents in laparoscopy. Residents are required to
               pass the FLS assessment before being board-eligible[8]. Accreditation for robotic surgery, however, lacks
               standardization and is based largely on case experience, leading to a large amount of variability in robotic
               training. Multiple international studies have attempted to advance robotic curricula; however, there
               is still a need for standardization, especially with rapidly developing technologies[9-12]. Past studies have
               worked on creating automatic assessment algorithms using CV for FLS. Lazar et al. utilized CV to
               differentiate between experts and novices performing the Peg Transfer task on the FLS trainer[13], while
               Islam et al. designed a video-based system capable of providing task-specific feedback during the FLS
               task[14].

               However, for robotic-assisted surgery, limited automatic assessment tools exist. Ma et al. have been able to
               create an algorithm to automatically provide feedback for robotic suturing[15,16]. These studies are the first to
               provide automatic assessment and feedback of robotic suturing in simulation and dry lab models for
               vesicourethral anastomosis.