
customized to solve unique clinical problems in all stages of cancer care. From a screening perspective, AI has been employed to supplement thorough endoscopic esophageal exams in order to identify premalignant and malignant conditions[4]. The utilization of machine learning (ML) with Barrett's esophagus is in its infancy, but the growth of this application is imperative to avoid missed high-risk lesions and interval cancers. As more surgeons adopt the robotic-assisted minimally invasive esophagectomy (RAMIE) technique, AI is increasingly integrated into the operating room to reduce the surgical learning curve, enhance safety and efficiency, and improve postoperative outcomes. In this review, we will explore the current applications of AI in robotic esophageal surgery, a field that, while still at an early stage, has shown great potential as a tool for esophageal surgeons.


               SCREENING AND PREOPERATIVE DECISION MAKING
Screening and early detection of esophageal premalignant lesions is imperative for early endoscopic and surgical treatment. AI has been applied to diagnostic upper endoscopy to aid endoscopists in the early detection and screening of premalignant and malignant esophageal diseases. Applications include computer-aided detection based on endoscopic images, deep learning algorithms applied to histologic specimens, and real-time video analysis. These AI algorithms demonstrate high sensitivity in identifying high-risk esophageal lesions and may enhance the traditional esophageal exam using high-definition white light and narrow band imaging[5,6]. Miss rates for esophageal adenocarcinoma and Barrett's esophagus with existing biopsy strategies are estimated to reach over 20% and 50%, respectively. Narrow band imaging and more advanced imaging techniques such as chromoendoscopy have improved diagnostic accuracy for endoscopists; however, they require greater expertise[6]. Deep learning algorithms have shown promise as a novel adjunct to endoscopists in identifying high-risk esophageal lesions.
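
To make this approach more concrete, the sketch below shows how a pretrained convolutional network might be fine-tuned to label endoscopic still frames as suspicious or non-suspicious, in the spirit of the computer-aided detection systems discussed here. The dataset layout, class labels, and training settings are illustrative assumptions and do not reproduce any specific published system.

```python
# Illustrative sketch only: a binary frame classifier for endoscopic still images,
# loosely modeled on the computer-aided detection approach described above.
# The dataset layout, labels, and hyperparameters are assumptions, not a published system.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for transfer learning.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed folder layout: endoscopy_frames/train/{non_suspicious,suspicious}/*.png
train_set = datasets.ImageFolder("endoscopy_frames/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the final layer for two classes.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # small illustrative training loop
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Reported systems are trained and validated on large expert-annotated image sets and typically pair classification with lesion localization; the loop above is deliberately minimal and only meant to show the overall shape of the pipeline.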

Computer-aided detection systems trained on large subsets of white light endoscopy images were able to identify Barrett's esophagus with exceptionally high sensitivity and specificity and near-perfect localization of the disease[7-9]. A similar approach has been applied using artificial neural network (ANN) analysis of endoscopic videos and yielded similar detection rates of esophageal dysplasia[10]. ML algorithms applied to histologic specimen slides can identify and differentiate non-dysplastic Barrett's esophagus, low-grade dysplasia, and high-grade dysplasia with >90% sensitivity and specificity[11,12]. Similar approaches have been applied to the detection of malignancy, with some neural network systems shown to detect gastroesophageal junction cancer on traditional white light endoscopy images with 66% accuracy, compared to an accuracy of 63% when analyzed by board-certified expert endoscopists[13]. AI-based tools have also been employed to ensure quality and consistency during routine screening endoscopy by providing automatic image capture and blind-spot recognition. In a randomized controlled trial, 153 patients were randomized to undergo routine screening endoscopy with a real-time quality improvement system, compared to 150 routine controls. In the experimental group, the quality of screening endoscopy was significantly improved, including an approximate 15% reduction in blind spot rate [95% confidence interval (CI) -19.23 to -11.54][14]. This quality improvement system, called WISENSE, employed deep convolutional neural networks (DCNN) and deep reinforcement learning (DRL) trained on over 34,000 esophagogastroduodenoscopy (EGD) images to classify gastric images into specific anatomical sites. Training this model involved first testing the DCNN on still images and eventually integrating the DCNN and DRL for testing on real EGD videos. After this initial training process, the system was formally tested in a randomized controlled trial.
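
At inference time, blind-spot recognition of this kind reduces to multi-class site classification applied frame by frame: each video frame is assigned to an anatomical station, and stations never confidently observed during the procedure are flagged as blind spots. The sketch below illustrates that bookkeeping under assumed site labels, an assumed pretrained classifier file (site_classifier.pt), and an assumed confidence threshold; it does not reproduce the WISENSE model or its reinforcement-learning component.

```python
# Illustrative sketch of frame-by-frame site recognition and blind-spot tracking,
# in the spirit of the quality improvement system described above. The site list,
# model weights, and confidence threshold are assumptions; the DRL video component
# of WISENSE is not reproduced here.
import cv2
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Hypothetical anatomical stations the classifier is assumed to recognize.
SITES = ["esophagus", "squamocolumnar_junction", "gastric_fundus",
         "gastric_body", "gastric_antrum", "duodenal_bulb"]

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assume a previously fine-tuned multi-class CNN saved as site_classifier.pt.
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, len(SITES))
model.load_state_dict(torch.load("site_classifier.pt", map_location="cpu"))
model.eval()

observed = set()
capture = cv2.VideoCapture("egd_procedure.mp4")  # recorded or live EGD feed
while True:
    ok, frame = capture.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        probs = F.softmax(model(preprocess(rgb).unsqueeze(0)), dim=1)[0]
    conf, idx = probs.max(dim=0)
    if conf.item() > 0.8:  # only count confident predictions as "visualized"
        observed.add(SITES[idx.item()])
capture.release()

blind_spots = [s for s in SITES if s not in observed]
print("Stations not visualized (potential blind spots):", blind_spots)
```

A deployed system would likely add temporal smoothing across consecutive frames and per-site image quality checks rather than trusting single-frame predictions; the fixed confidence threshold here is a crude stand-in for that.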

As medical care has become increasingly individualized, AI has been integrated into the medical decision-making process for staging, prognosticating, and treating premalignant and malignant esophageal lesions. Current estimations for progression of Barrett's esophagus and surveillance intervals have been based on previously published large studies. In part due to interobserver variability in endoscopic and histologic