Kimbowa et al. Art Int Surg 2024;4:149-69 https://dx.doi.org/10.20517/ais.2024.20 Page 155
ultrasound beam is perpendicular to the needle to maximize specular reflection [6]. The steered images are
then spatially compounded to form a single image in which the needle is more conspicuous. Cheung and
Rohling developed an algorithm that automatically adjusts the steering angle in linear 2D ultrasound
arrays [51]. The challenge with beam steering is that it does not localize the needle and also has a limited
steering angle that makes needle enhancement challenging for steeper and deeper insertions [52].
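Spatial compounding of the steered frames can be sketched in a few lines. The following is a minimal, hypothetical numpy illustration (synthetic frames with additive noise standing in for speckle; not any cited implementation): averaging co-registered steered frames suppresses uncorrelated noise while the consistent needle echo is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "steered" frames: the same scene observed with independent,
# speckle-like noise (a hypothetical stand-in for different steering angles).
scene = np.zeros((64, 64))
scene[30, 10:50] = 1.0  # bright, roughly specular needle echo

frames = [scene + 0.3 * rng.standard_normal(scene.shape) for _ in range(9)]

# Spatial compounding: average the co-registered steered frames.
compounded = np.mean(frames, axis=0)

# Background noise drops roughly as sqrt(N) while the needle echo persists,
# making the needle more conspicuous in the compounded image.
single_noise = frames[0][0].std()     # std of a background row in one frame
compound_noise = compounded[0].std()  # same row after compounding
print(single_noise, compound_noise)
```

In practice the steered frames must first be scan-converted into a common coordinate frame before averaging; the sketch assumes they are already co-registered.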
Classical image processing methods
The challenges faced by image acquisition methods inspired exploration into image processing methods
that were independent of beam-steering. Classical image processing methods follow a workflow involving:
image preprocessing, feature extraction, needle detection from the features, and postprocessing to localize
the needle [Figure 3]. In the image processing step, the input ultrasound images are transformed using a
preprocessor P with corresponding parameters s. Early image processing approaches involved modeling
ultrasound signal transmission to estimate signal loss due to attenuation [53]. However, this approach only
works for in-plane insertion. This challenge can be overcome using digital subtraction of consecutive
frames [54] or optical flow methods [55] to capture subtle motion changes even when the needle is
imperceptible. Image processing for 3D ultrasound volumes could involve transforming the volume into
appropriate views to normalize deformed object representations inherent in 3D ultrasound transducers [56].
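The frame-subtraction idea above can be sketched as follows (a minimal numpy illustration with synthetic frames; not the cited authors' implementation): subtracting consecutive frames cancels static tissue and leaves only the moving needle.

```python
import numpy as np

rng = np.random.default_rng(1)

# Static tissue background shared by consecutive frames (hypothetical phantom).
tissue = rng.random((64, 64))

frame_prev = tissue.copy()
frame_curr = tissue.copy()
frame_curr[20, 30:40] += 0.5  # faint echo from the advancing needle

# Digital subtraction: static anatomy cancels, motion (the needle) remains.
diff = np.abs(frame_curr - frame_prev)

needle_pixels = np.argwhere(diff > 0.25)
print(needle_pixels[:, 0].min(), needle_pixels[:, 0].max())  # prints "20 20"
```

Real tissue also moves (breathing, probe pressure), which is why optical flow methods that distinguish needle motion from bulk tissue motion are often preferred over plain subtraction.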
After image processing, features are extracted using a handcrafted feature extractor E parameterized in λ₁.
Classical feature extractors can be categorized into two groups: intensity-based feature extractors and phase-based
feature extractors. Intensity-based feature extractors use edge-detection methods to localize the needle; for
instance, Ayvali and Desai used a circular Hough Transform to directly localize the tip of a hollow needle in
2D ultrasound [57]. The challenge with intensity-based features is that they depend on the visibility of the
needle and assume the needle to be the brightest object within the image [57]. Such features quickly fail in the
presence of other high-intensity artifacts such as tissue close to bone. This led to the adoption of scale- and
rotation-invariant feature extractors such as Gabor filters, log-Gabor filters, and Histogram of Oriented
Gradients (HOG) [56,58-60].
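As an illustrative sketch of such oriented filters (numpy only, with hypothetical kernel parameters; not any cited implementation), a small Gabor filter bank responds most strongly when the filter's carrier is perpendicular to a line-like needle echo, so the winning orientation reveals the needle's direction:

```python
import numpy as np

def gabor_kernel(theta, size=21, sigma=3.0, lam=8.0):
    """Real-valued Gabor kernel; the carrier oscillates along direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# Synthetic patch with a horizontal, line-like needle echo through the centre.
patch = np.zeros((21, 21))
patch[10, :] = 1.0

# For a horizontal line, the filter whose carrier oscillates vertically
# (theta = pi/2) aligns its central lobe with the echo and responds strongest.
thetas = [0.0, np.pi / 4, np.pi / 2]
responses = {t: float(np.sum(patch * gabor_kernel(t))) for t in thetas}
best = max(responses, key=responses.get)
print(best)
```

A full detector would convolve the whole image with the bank and keep, per pixel, the maximal response over orientations; the same idea underlies log-Gabor and HOG variants, which trade the raw intensity assumption for orientation structure.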
Based on the extracted features, the input image is segmented by the handcrafted decoder D parameterized
in λ₂ to obtain a binary map indicating which pixels are likely to be the needle. To localize the needle in the
segmentation map, postprocessing approaches such as RANSAC [61], the Kalman filter [61], the Hough
Transform [52-57], and the Radon Transform [53] are used to fit a line through the segmentation map to obtain the
needle trajectory. The needle tip is then determined as the pixel with the highest intensity at the distal end of
the estimated trajectory. However, this naive approach is not robust to high-intensity artifacts along the
estimated trajectory, and methods such as Maximum Likelihood Estimation Sample Consensus (MLESAC) [62]
can be used to determine the most likely pixel corresponding to the needle tip.
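The line-fitting and tip-localization steps above can be sketched as follows (a minimal, hypothetical RANSAC on a synthetic segmentation map; parameters and the distal-end rule are illustrative, not any cited implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic segmentation output: needle pixels on the line y = 0.5 x + 5,
# plus a few spurious pixels from high-intensity artifacts.
xs = np.arange(10, 50)
needle = np.stack([xs, 0.5 * xs + 5], axis=1)
outliers = rng.uniform(0, 64, size=(10, 2))
points = np.vstack([needle, outliers])

def ransac_line(pts, n_iter=200, tol=1.0):
    """Fit a line y = m x + c by RANSAC; returns the model with most inliers."""
    best, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue  # skip vertical candidate pairs in this simple sketch
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = np.sum(np.abs(pts[:, 1] - (m * pts[:, 0] + c)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (m, c), inliers
    return best

m, c = ransac_line(points)

# Tip estimate: the inlier pixel at the distal end of the fitted trajectory.
inlier_mask = np.abs(points[:, 1] - (m * points[:, 0] + c)) < 1.0
tip = points[inlier_mask][np.argmax(points[inlier_mask, 0])]
print(m, c, tip)
```

Picking the distal inlier corresponds to the naive tip rule described above; replacing that last step with an intensity-weighted consensus along the trajectory is where MLESAC-style refinements come in.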
Generally, image processing methods can take as input either a single ultrasound image [53,63], two
consecutive images [54,61,64], or a video stream of ultrasound images [52,57,59]. Algorithms that rely only on a single
image to make a prediction are generally more suitable for needle tip verification, such as in needle ablation
procedures. Nevertheless, they can be successively applied on a stream of ultrasound images to facilitate
needle guidance toward a target in real time [65]. However, algorithms that rely on a stream of images before
making a prediction are more suited for needle guidance as they benefit from the temporal information
encoded within the image stream. These algorithms have the potential to model needle motion during
insertion, making them good candidates for needle guidance [55,66] .
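As an illustrative sketch of exploiting this temporal information (a hypothetical constant-velocity Kalman filter smoothing noisy per-frame tip detections; the motion and noise parameters are assumptions, not a cited method):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated per-frame tip detections: the tip advances 1 px/frame along the
# insertion axis, and each frame's detection carries measurement noise.
true_x = np.arange(100, dtype=float)
meas = true_x + rng.normal(0.0, 2.0, size=true_x.size)

# Constant-velocity Kalman filter over the state [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition (unit time step)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = np.eye(2) * 0.01                    # process noise
R = np.array([[4.0]])                   # measurement noise (std 2 px)

x = np.array([[meas[0]], [0.0]])        # initial state: first detection, v = 0
P = np.eye(2) * 10.0
est = []
for z in meas:
    # Predict with the motion model, then correct with the new detection.
    x = F @ x
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0, 0])

est = np.array(est)
raw_err = np.abs(meas - true_x).mean()
kf_err = np.abs(est - true_x).mean()
print(raw_err, kf_err)  # the filter typically reduces the per-frame error
```

Because the motion model carries information between frames, the filtered trajectory is smoother and more accurate than the raw per-frame detections, which is precisely the advantage stream-based algorithms have for needle guidance.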

