


[Table continued from previous page]
(…) Testing: 122 images | (…) and 97.58% on CVC-ClinicDB databases, respectively
Lei et al.[77] | Polyps and tumours | 2023 | Combined prospective/retrospective | United Kingdom | Proposed to determine the efficacy of AI tools for polyp detection in capsule endoscopy | Study is incomplete | CNN | Study is incomplete

AI: Artificial intelligence; KNN: K-nearest neighbour; MLP: multilayer perceptron; SVM: support vector machine; BC: binary classifier; WCE: wireless capsule endoscopy; SSAEIM: stacked sparse autoencoder with image manifold constraint; TI: turbid image; CI: clear image; CCE: colon capsule endoscopy; CNN: convolutional neural network; LCDH: local colour difference; GMM: Gaussian mixture model; SSMD: single shot multibox detector; KID: Koulaouzidis-Iakovidis database; DBMF: dual branch multiscale feature fusion network; GI: gastrointestinal.





Deep learning has shown significant promise in the field of diagnostic capsule endoscopy due to its ability to learn from large volumes of data and make accurate predictions. Current commercial capsule endoscopes have algorithms available to assist with interpretation, such as the TOP 100 feature of Rapid Reader[45]. However, the training of these algorithms is based on traditional supervised learning methods. Unlike traditional machine learning algorithms, which require manual feature extraction and selection, deep learning models can automatically learn and extract features from raw data[108]. CNNs, in particular, are designed to automatically and adaptively learn spatial hierarchies of features from raw data, which makes them well suited to image classification tasks in capsule endoscopy, as evidenced in the studies above. Given the rise in image resolution and the growing volume of training images and videos, unsupervised methods capitalising on these AI systems will become even more efficient and accurate in future.
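To make the contrast with manual feature engineering concrete, the sketch below shows, in Python with PyTorch, how a small CNN maps raw capsule-endoscopy pixels directly to class scores. This is a minimal illustration only: the layer sizes, the two-class (normal vs. polyp) setup, and the 224 x 224 input are assumptions, not the architecture of any study cited above.

    # Minimal sketch (assumptions: PyTorch; 2 classes, e.g. normal vs. polyp;
    # 224x224 RGB frames). Not the architecture of any cited system.
    import torch
    import torch.nn as nn

    class CapsuleFrameCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            # Stacked conv/pool blocks learn a spatial hierarchy of features:
            # early layers respond to edges and colour, deeper layers to
            # lesion-scale patterns; no hand-crafted features are supplied.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    model = CapsuleFrameCNN()
    frame = torch.randn(1, 3, 224, 224)  # one dummy RGB frame
    logits = model(frame)                # per-class scores learned from raw pixels
    print(logits.shape)                  # torch.Size([1, 2])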


Despite the advantages of deep learning, it is not without its pitfalls. One of its main criticisms is the “black box” problem. Due to the complexity and depth of these models, it can be challenging to understand and interpret how they make their predictions. This lack of transparency and interpretability can be problematic in medical applications, where understanding the reasoning behind a diagnosis is crucial for patient care and trust[109]. The “black box” problem
also raises concerns about the reliability and fairness of deep learning models. If the reasoning behind a model’s prediction is not clear, it can be difficult to determine whether the model is making decisions based on relevant features or whether it is being influenced by irrelevant or biased data[109]. This is an intrinsic issue with deep learning; hence, models must be validated prospectively prior to use in clinical settings. Currently, AI researchers are exploring a concept known as Explainable AI to help understand the logic and decision-making process within a black box.
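As a concrete illustration of what Explainable AI can look like in this setting, the sketch below implements occlusion sensitivity, a simple model-agnostic technique: mask one patch of the frame at a time and record how far the score for the predicted class falls; large drops mark the regions the model relied on. The model interface, patch size, and target class here are assumptions for illustration, not a method from the cited studies.

    # Minimal Explainable-AI sketch: occlusion sensitivity (assumptions:
    # PyTorch; a classifier like the CapsuleFrameCNN sketch above that maps
    # a [1, 3, H, W] frame to per-class logits).
    import torch

    @torch.no_grad()
    def occlusion_map(model, frame, target_class, patch=32):
        model.eval()
        base = model(frame)[0, target_class].item()  # unoccluded class score
        _, _, h, w = frame.shape
        heat = torch.zeros(h // patch, w // patch)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = frame.clone()
                occluded[:, :, i:i + patch, j:j + patch] = 0.0  # black out one patch
                score = model(occluded)[0, target_class].item()
                heat[i // patch, j // patch] = base - score     # score drop = importance
        return heat

    # e.g. heat = occlusion_map(model, frame, target_class=1), reusing the
    # model and frame from the earlier sketch; the heat map can then be
    # upsampled and overlaid on the frame to show which regions drove the
    # prediction.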



When training AI on WCE data, the images obtained may not be histologically verified, as biopsies cannot be taken without invasive enteroscopy. This issue undoubtedly has implications for the reliability of the AI algorithms due to the potential inaccuracy of the training dataset used. It may adversely affect diagnostic accuracy, causing either false-positive or false-negative results, both of which have significant clinical implications. The issue of data quality can