
Torabinia et al. Mini-invasive Surg 2021;5:32  https://dx.doi.org/10.20517/2574-1225.2021.63  Page 5 of 12

               Co-registration algorithms
A key step in this system is to co-register the catheter and the heart model in a single coordinate system. To this end, four metal spheres were embedded in our heart phantom model and used as fiducial markers. As shown in Figure 3A, the catheter and all four fiducial markers are visible in both bi-plane fluoroscopic images, so that they can be tracked and processed using the OpenCV library in Python. The OpenCV processing comprises bitwise-NOT, smoothing, and contour-detection operations, illustrated in Figure 3B and C. Next, the radiopaque markers' 2D coordinates are identified in both fluoroscopic images (RAO 30°, LAO 55°) and fed into the co-registration algorithms. Using one of the radiopaque markers as a reference, the coordinates of the others are offset relative to it. From the offset positions of the fiducial markers and the known rotation angle, the 3D positions are solved with Eq. 3, as shown in Figure 3D. The positions of the four predefined fiducial markers are then used to calculate the affine transformation matrix into a single coordinate system using Eq. 4 and Eq. 5. Finally, the transformation matrix is applied to the position of the catheter's tip, as retrieved from the U-Net model prediction, to co-register it in the same coordinate system.
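The affine mapping described above (Eq. 4 and Eq. 5 are not reproduced here) can be sketched as a least-squares fit over the four fiducial correspondences; the function names below are illustrative, not the paper's implementation:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine transform (3x4 matrix) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding fiducial positions, N >= 4
    non-coplanar points are required for a unique solution.
    """
    n = src.shape[0]
    # Homogeneous coordinates: append a column of ones, shape (N, 4)
    A = np.hstack([np.asarray(src, float), np.ones((n, 1))])
    # Solve A @ X ~= dst in the least-squares sense; X is (4, 3)
    X, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return X.T  # (3, 4): rotation/scale part plus translation column

def apply_affine_3d(M, pts):
    """Apply the fitted 3x4 affine matrix to one or more 3D points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :3].T + M[:, 3]
```

Once fitted on the four fiducials, the same matrix is applied to the catheter-tip position recovered from the bi-plane reconstruction.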


RESULTS AND DISCUSSION
               Bi-plane co-registration accuracy
               To validate the accuracy of our 3D co-registration algorithm, we 3D printed a jig that holds an array of 50
               metal spheres at various heights, shown in Figure 4. Using the biplane C-arm, two fluoroscopic images from
               two different angles were acquired and processed as described in section 2.5. Finally, the absolute error for
               each sphere was determined based on the difference between the true value measured from the 3D CAD file
and the calculated value from the processed bi-plane images using our co-registration algorithm. As can be seen from Figure 4C, the average absolute error was 0.12 ± 0.11 mm, which is sufficiently accurate for cardiac interventions.
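A minimal sketch of the error statistics above, assuming paired arrays of ground-truth positions from the 3D CAD file and co-registered estimates (the variable names are illustrative):

```python
import numpy as np

def registration_errors(true_pts, est_pts):
    """Per-sphere Euclidean error plus mean and standard deviation.

    true_pts, est_pts: (N, 3) arrays of corresponding 3D positions.
    """
    diff = np.asarray(est_pts, float) - np.asarray(true_pts, float)
    err = np.linalg.norm(diff, axis=1)  # absolute error per sphere
    return err, err.mean(), err.std()
```

Reporting the mean ± standard deviation of these per-sphere errors yields a summary of the form 0.12 ± 0.11 mm.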


               Catheter tip detection
The primary region of interest of a catheter during a procedure is its tip. Any intra-operative errors while maneuvering the catheter tip in the vascular system may raise the risk of puncture, embolization, or tissue damage[54,55]. We therefore trained a deep learning U-Net model to detect the catheter tip's radiopaque marker in each frame of the fluoroscopic images. Figure 5 depicts the ground-truth and predicted segmentation of the catheter tip's radiopaque marker for the testing dataset. To evaluate the model performance, we used area-based indexes to compare the predicted segmentation results with the ground truth. These indexes include the Dice coefficient (DSC)[56], binary cross-entropy, and Intersection over Union (IoU), which can be found in Table 1. To improve the performance of the U-Net model on our datasets and avoid overfitting during training, we performed extensive data augmentation[54], including random shifting, scaling, rotation, and brightness/contrast changes, as shown in Figure 6. For each augmentation experiment, the IoU for each image and the mean over the entire testing dataset (60 images) were calculated. The best performance was obtained by applying 10 random translations per image (±20 pixels), scaling with a zoom range of 0.1, 10 regular rotations per image, and random brightness and contrast of 0.5, resulting in an IoU of 83.67%. It should be noted that this segmentation performance (Dice of 0.8457 and IoU of 0.8367) corresponds to a tracking accuracy of < 1 mm, well within the acceptable range for catheter tip tracking in cardiac applications.
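The area-based indexes reported in Table 1 can be computed directly from binary segmentation masks; a minimal NumPy sketch (the paper's exact implementation is not shown) is:

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-7):
    """Intersection over Union (Jaccard index) between two binary masks."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps)
```

Averaging the per-image IoU over the 60 test images gives the mean score reported for each augmentation experiment.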


To highlight the accuracy and efficiency of the deep learning segmentation approach, we compared the performance of the U-Net architecture with several classical image processing techniques (e.g., thresholding, watershed, and OpenCV contour finding and drawing). The appearance of the catheter's radiopaque marker is affected by partial occlusions, intensity saturation, and motion blur. As can be seen from Figure 5, and despite the