Erosions and ulcers
Erosions and ulcers are among the most common findings on WCE. These lesions have fewer distinctive visual features than the visibly haemorrhagic lesions discussed above, and hence their characterisation is more difficult. Early work by Charisis et al., utilising Bi-dimensional Ensemble Empirical Mode Decomposition and SVMs to identify ulcers, obtained a sensitivity and specificity of around 95%[23]. While other MLP and SVM models with similar accuracies were created prior to 2014[24-26], the earliest study utilising a deep learning framework for the detection of ulcers and erosions is believed to be the 2018 work by Fan et al., which employed a CNN achieving sensitivities of 96.80% and 94.79% and specificities of 94.79% and 95.98% for ulcers and erosions, respectively[27]. Since 2018, only two non-deep learning models were retrieved[28,29], in comparison to 14 deep learning models[30-42]. Most recently, in 2023, Nakada et al. published their use of the RetinaNet model to diagnose multiple types of lesions, including erosions, ulcers, vascular lesions, and tumours[43]. This study obtained a sensitivity of 91.9% and specificity of 93.6% in the detection of erosions and ulcers [Table 2].
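
For readers less familiar with how such frame classifiers are built, the following is a minimal sketch of a CNN-based ulcer/erosion classifier in PyTorch. The dataset layout, backbone, and hyperparameters are illustrative assumptions; it does not reproduce the architecture of Fan et al. or any other cited study.

```python
# Minimal sketch of a CNN frame classifier for ulcers/erosions.
# Dataset paths and class layout are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Resize WCE frames to the input size expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: wce_frames/train/{normal,ulcer_erosion}/*.png
train_set = datasets.ImageFolder("wce_frames/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a pretrained backbone for the binary normal-vs-lesion task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimiser.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimiser.step()
```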

               Vascular lesions and angiodysplasias
Angiodysplasias, defined as accumulations of dilated, tortuous blood vessels in the mucosa and submucosa of the intestinal wall, are common pathologies that can cause small intestinal bleeding. The first record of a software tool for the diagnosis of enteric lesions, including angiodysplasias, was the work by Gan et al. in 2008, which used image processing software to obtain a median sensitivity of 74.2%[44]. Only two non-deep learning models were retrieved in the search: a study by Arieira et al. evaluating the accuracy of the TOP 100 feature of Rapid Reader™[45] and a 2019 investigation by Vieira et al. of MLPs and SVMs, which obtained sensitivities above 96%[46]. Since 2019, only deep learning models have been employed in this field[47-53]. In 2018, Leenhardt et al. published their CNN model for detecting gastrointestinal angiodysplasias[54], obtaining an exceptional sensitivity of 100% and specificity of 95.8%. Moreover, they assisted in constructing a French national database (CAD-CAP) to collect and maintain high-quality capsule endoscopy images for the training and validation of AI assistive tools. Recently, in 2023, Chu et al. published their CNN built on the ResNet-50 architecture, which obtained a positive predictive value of 94% and a negative predictive value of 98%, in addition to the capability of segmenting and recognising an image in 0.6 s[53] [Table 3].
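
As an illustration of the inference step behind such per-frame throughput figures, the sketch below times a single-frame prediction with a fine-tuned ResNet-50 in PyTorch. The checkpoint name, frame path, and binary labels are hypothetical and do not correspond to Chu et al.'s published model.

```python
# Sketch of timed per-frame inference with a binary ResNet-50 classifier.
import time
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # angiodysplasia vs. normal
model.load_state_dict(torch.load("angiodysplasia_resnet50.pt"))  # hypothetical checkpoint
model.eval()

frame = preprocess(Image.open("frame_000123.png")).unsqueeze(0)  # hypothetical frame

start = time.perf_counter()
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)
elapsed = time.perf_counter() - start

print(f"P(angiodysplasia) = {probs[0, 1]:.3f}, inference took {elapsed:.2f} s")
```
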
               Polyps and tumours
The significance of detecting polyps and tumours stems from their potential to cause significant morbidity and mortality, and a substantial body of research has been devoted to exploring AI-assisted capsule endoscopy for the accurate identification and detection of these lesions. Early research in this application includes a study by Li et al. in 2011, which utilised colour texture features to differentiate between normal and tumour-containing images with a sensitivity of 92.33% and a specificity of 88.67%[55]. Multiple other machine learning models utilising binary classifiers, SVMs, and MLPs have been applied with varying accuracies and efficacies[56-61]. Deep learning was integrated into the field with the study by Yuan and Meng in 2017[62], in which a stacked sparse autoencoder was used to categorise images into polyps, bubbles, turbid images, and clear images with an overall accuracy of 98.00%. Since then, 12 deep learning applications have been used for polyp and tumour detection[63-74]. More recently, a 2023 study by Lafraxo et al. proposed an innovative CNN-based model (ResNet50) that achieved an accuracy of 99.16% on the MICCAI 2017 WCE dataset[73]. In 2022, research conducted by Piccirelli et al. investigating the diagnostic accuracy of IntroMedic's Express View achieved a 97% sensitivity and 100% specificity[75]. As AI polyp detection tools are already commercially available for colonoscopy, such as FujiFilm's CADeye[76] and EndoBRAIN (Olympus), the imminent release and usage of AI tools for capsule endoscopy is expected in light of these promising results, which will likely be further supported by future research such as the planned multi-centre CESCAIL study[77] [Table 4].
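
To make the classical colour-texture pipeline concrete, the sketch below pairs a simple HSV-histogram descriptor with an SVM in scikit-learn. The descriptor and directory layout are illustrative simplifications, not the colour texture features used by Li et al.

```python
# Sketch of colour-texture classification of tumour-containing frames:
# per-channel HSV histograms fed to an SVM. Paths are hypothetical.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def hsv_histogram(path, bins=8):
    """Concatenated per-channel HSV histograms as a simple colour descriptor."""
    hsv = np.asarray(Image.open(path).convert("HSV"))
    feats = [np.histogram(hsv[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)

# Hypothetical layout: wce_frames/{normal,tumour}/*.png
normal_paths = sorted(Path("wce_frames/normal").glob("*.png"))
tumour_paths = sorted(Path("wce_frames/tumour").glob("*.png"))

X = np.array([hsv_histogram(p) for p in normal_paths + tumour_paths])
y = np.array([0] * len(normal_paths) + [1] * len(tumour_paths))

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```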