
Page 158                                                      Kimbowa et al. Art Int Surg 2024;4:149-69  https://dx.doi.org/10.20517/ais.2024.20

efficiently detect the needle[85]. Object detection can also be performed directly for the needle tip instead of the entire needle[86].
               DISCUSSION
Various methods and approaches have been developed to enhance needle alignment, visualization, and localization in ultrasound. Currently, all methods aimed at improving needle alignment are hardware-based [Figure 2] and require additional hardware, which increases the cost of ultrasound systems and disrupts normal workflow. The same applies to hardware-based needle visualization methods. This can be a major challenge, especially in resource-constrained communities such as rural and remote settings, and in low- and middle-income countries that cannot afford additional hardware.

On the other hand, software-based methods do not require additional hardware and could be a viable alternative in such scenarios. Classical image-based methods for needle visualization and localization rely heavily on carefully engineered feature extractors and classifiers, which are often not robust to varying image acquisition settings and image quality. Learning-based methods address this challenge by automatically learning the feature extractor and/or classifier from existing data. Deep learning-based methods exhibit superior performance compared to classical methods; thus, this discussion focuses mainly on the most recent deep learning-based methods for needle visualization and localization, summarized in Table 1 and detailed in Table 2.


The challenge with learning-based methods is that they require large amounts of training data. These data can be collected from tissue-mimicking phantoms, freshly excised animal cadavers, or in vivo during clinical procedures. Most methods use data collected by performing needle insertions in vitro with phantoms and ex vivo with porcine, bovine, and chicken tissue while mimicking clinical scenarios [Table 1]. Only methods developed for HDR prostate brachytherapy consistently use human in vivo data for evaluation. Future methods can find motivation in Gillies et al., who evaluated their approach on in vivo datasets from multiple organs and scenarios in addition to phantom datasets[89].
The data are typically annotated by an expert sonographer who performed the needle insertion experiments to obtain the ground-truth labels. In some scenarios, a hardware-based tracking system is used to obtain a more accurate needle tip location, especially where the needle is imperceptible to the human eye[54,66,88]. All the proposed approaches were evaluated on locally collected datasets, which is not ideal for comparing methods, as noise and biases can easily be introduced into the data. To date, there is no benchmark dataset on which developed methods can be evaluated, which has significantly stifled progress[11].

The typical evaluation metric for learning-based methods is needle tip error, as the ultimate goal of needle localization is to avoid puncturing critical tissue, such as veins, along the needle trajectory. For segmentation methods that also detect the needle shaft, needle trajectory/orientation error is an important metric, in addition to needle tip error, for assessing model performance during needle guidance. Needle tip localization error improved steadily over the years until 2022, after which progress appears to have slowed [Figure 4]. However, needle orientation error is also used to ensure that a large portion of the needle shaft is accurately detected. Among all the proposed methods, deep learning-based methods that report both tip localization and orientation error achieve state-of-the-art performance [Figure 4B][76,78,83]. Another key metric for software-based methods is inference time on a central processing unit (CPU), since deployed algorithms should achieve real-time performance, considered to be any processing speed greater than 16 fps[14].
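To make these metrics concrete, the sketch below shows how tip error, orientation error, and the real-time threshold could be computed. The function names and pixel-spacing parameter are illustrative assumptions, not taken from any of the cited methods.

```python
import numpy as np

def tip_error_mm(pred_tip, gt_tip, px_spacing_mm=1.0):
    """Euclidean needle-tip localization error in millimetres,
    given tip coordinates in pixels and an isotropic pixel spacing."""
    diff = (np.asarray(pred_tip, float) - np.asarray(gt_tip, float)) * px_spacing_mm
    return float(np.linalg.norm(diff))

def orientation_error_deg(pred_axis, gt_axis):
    """Angle in degrees between predicted and ground-truth shaft directions.
    The absolute dot product makes the metric sign-invariant, since a
    needle shaft has no preferred direction along its axis."""
    a = np.asarray(pred_axis, float)
    b = np.asarray(gt_axis, float)
    cos = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))

def is_real_time(ms_per_frame):
    """Real-time threshold used in the text: more than 16 frames per second."""
    return 1000.0 / ms_per_frame > 16.0
```

For example, a predicted tip 2 pixels off axially and 4 pixels off laterally at 0.3 mm/pixel spacing gives a tip error of about 1.34 mm, and a frame processed in 50 ms (20 fps) meets the real-time threshold while one processed in 100 ms (10 fps) does not.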