Figure 2. Overview of the technique for overlaying the 3D model and the surgical view in AR. No direct patient identifiers are included in this image. AI: Artificial intelligence; CNN: convolutional neural network; ICP: iterative closest point; CPD: coherent point drift; SfM: structure from motion; SGBM: semi-global block matching; Mask R-CNN: Mask Region-based CNN; FEM: finite element method; 3D: three-dimensional; AR: augmented reality.

               Other registration innovations
Despite the potential of fiducial-based navigation, a key drawback is the time-consuming correction process in case of significant misalignment. Without fiducial markers, a registration method utilizing affine transformations was proposed in 2013 to accommodate organ movement[45]. Affine transformations manipulate geometric objects, enabling translation, rotation, scaling, and shearing. Yet, their major limitation is the inability to represent bending deformation. B-spline curves, capable of complex nonlinear deformations such as bending and twisting, have been used in some studies[46], but accuracy drops when the deformation is significant. Recent research has explored deep learning-based registration methods. In 2021, Jia et al. reported that SiamMask, a deep learning method for object tracking and segmentation, substantially improved SLAM accuracy and accommodated organ movement and deformation[47]. In 2022, Padovan et al. presented a study using two convolutional neural networks (CNNs) to identify organ positions from RGB images captured by a camera and register them with a CT model[48].
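
As a brief illustration of why a purely affine registration cannot capture bending, the following minimal sketch (in Python with NumPy; all names and parameter values are hypothetical, not taken from the cited studies) composes a rotation, anisotropic scaling, shear, and translation into a single 4 × 4 matrix and applies it to a stand-in organ point cloud. Because the mapping is a linear transform plus a translation, straight lines stay straight, so bending or twisting of the organ surface cannot be reproduced.

```python
import numpy as np

def make_affine(rotation_deg=15.0, scale=(1.0, 1.1, 0.9),
                shear_xy=0.2, translation=(5.0, -3.0, 2.0)):
    """Compose a 4x4 homogeneous affine matrix (illustrative values)."""
    theta = np.deg2rad(rotation_deg)
    # Rotation about the z-axis
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    S = np.diag(scale)          # anisotropic scaling
    Sh = np.eye(3)
    Sh[0, 1] = shear_xy         # shear of x along y
    A = np.eye(4)
    A[:3, :3] = R @ S @ Sh      # combined linear part
    A[:3, 3] = translation      # translation part
    return A

def apply_affine(points, A):
    """Transform an (N, 3) point cloud by a 4x4 affine matrix."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ A.T)[:, :3]

# Stand-in for a kidney surface point cloud; every straight line in it
# remains straight after the transform, hence no bending is possible.
cloud = np.random.rand(1000, 3) * 100.0
warped = apply_affine(cloud, make_affine())
```

A B-spline (free-form deformation) model, by contrast, drives the transform with a grid of control points, so displacements can vary locally and reproduce bending or twisting, which is why such models were adopted in the studies cited above despite their reduced accuracy under large deformation.
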
All the aforementioned 3D navigation techniques involve overlaying 3D models derived from CT onto 2D images. If the surgical scene could be reconstructed in 3D and then superimposed with the 3D CT-derived model, it could lead to more precise navigation. However, the technical hurdles are high, and there are currently no reports of its effective use in actual surgeries. This section provides an overview of research on PN navigation using 3D-3D registration.


To perform 3D-3D registration, it is necessary not only to reconstruct the kidney model in 3D from CT images but also to reconstruct the surgical scene in 3D. Furthermore, 3D point cloud registration methods differ from those of 2D-3D registration. The first method of 3D-3D navigation in PN was reported by Su et al.[49]. They reconstructed the surgical scene in 3D from the disparity of a stereo camera and manually registered it with the 3D model of the kidney derived from CT using the Iterative Closest Point (ICP) algorithm. However, while ICP is a common 3D registration method, it has limitations: it requires the point clouds to start relatively close together, and it has difficulty registering deformable models.
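
For illustration, the sketch below shows the basic shape of rigid ICP between an intraoperative point cloud (for example, one reconstructed from stereo disparity) and a CT-derived kidney model. It is a minimal, hypothetical implementation in Python with NumPy and SciPy, not the code used in the cited studies: it alternates nearest-neighbour matching with a closed-form (SVD-based) rigid pose estimate, and it inherits the limitations noted above, namely the need for a reasonable initial alignment and the inability to account for deformation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping paired points
    src onto dst (Kabsch/SVD solution)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iterations=50, tolerance=1e-6):
    """Rigid ICP: repeatedly match each source point to its nearest target
    point and re-estimate the pose. Converges poorly if the clouds start
    far apart or if the organ has deformed."""
    tree = cKDTree(target)
    current = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_error = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(current)                  # closest-point matches
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t  # accumulate pose
        error = dist.mean()
        if abs(prev_error - error) < tolerance:
            break
        prev_error = error
    return R_total, t_total
```

In practice, the intraoperative cloud would typically be cropped to the exposed kidney surface before registration, since nearest-neighbour matches are easily dominated by points from surrounding tissue.
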
Zampokas et al. used a strategy known as quasi-dense matching, combining sparse and dense matching, to create a 3D point cloud in 2018[50]. A subsequent study in 2022 used the DynamicFusion method for the registration of this 3D point cloud with the 3D kidney model[51]. There has also been a report of extracting