Figure 3. Classification of typical 2D point clouds of staircases. 2D: Two-dimensional.


               As depicted in Figure 3, our workflow begins with utilizing the RANSAC algorithm to determine the number
               of stairsteps visible within the camera’s perspective. This initial step provides a crucial parameter, allowing us
               to gain insights into the staircase structure and layout.
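
As a rough illustration of this step, the following sketch fits line segments to a 2D staircase profile with RANSAC and counts the resulting tread/riser segments to estimate how many stairsteps are visible. The function names (ransac_line, count_visible_steps), the thresholds, and the tread/riser labelling heuristic are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

def ransac_line(points, n_iters=200, dist_thresh=0.01):
    """Fit one line to 2D points with RANSAC; return the inlier mask."""
    best_inliers = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the candidate line.
        normal = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points - p) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

def count_visible_steps(profile, min_inliers=30, dist_thresh=0.01):
    """Count stair segments visible in a 2D (depth, height) profile.

    `profile` is an (N, 2) array, e.g. one vertical slice of the depth
    camera's point cloud (an assumed preprocessing step).
    """
    remaining = profile.copy()
    segments = []
    while len(remaining) >= min_inliers:
        inliers = ransac_line(remaining, dist_thresh=dist_thresh)
        if inliers.sum() < min_inliers:
            break
        seg = remaining[inliers]
        # Label by dominant direction: near-horizontal spans are treads,
        # near-vertical spans are risers.
        extent = seg.max(axis=0) - seg.min(axis=0)
        segments.append("tread" if extent[0] > extent[1] else "riser")
        remaining = remaining[~inliers]
    # Each visible stairstep contributes roughly one tread/riser pair.
    return segments.count("tread"), segments
```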


               Subsequently, we employ a set of if-else rules to classify the captured point cloud data into the aforementioned
               seven distinct shapes. These rules consider the specific geometric characteristics of staircases observed from
               different camera perspectives. By adhering to these classification rules, we accurately categorize the extracted
               point cloud data, ensuring that each shape is precisely identified.
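
A rule set of this kind can be written as a short cascade of if-else checks over the detected segments. The sketch below is purely illustrative: the seven shape labels and the specific conditions are placeholders standing in for the categories of Figure 3, not the authors' actual rules.

```python
def classify_profile(segments):
    """Map a sequence of detected 'tread'/'riser' segments to a shape label.

    The labels and conditions are hypothetical stand-ins for the seven
    staircase shapes in Figure 3; only the if-else structure is the point.
    """
    n_treads = segments.count("tread")
    n_risers = segments.count("riser")
    if n_treads == 0 and n_risers == 0:
        return "no_staircase"
    if n_treads >= 1 and n_risers == 0:
        return "treads_only"
    if n_risers >= 1 and n_treads == 0:
        return "risers_only"
    if segments[0] == "riser" and segments[-1] == "tread":
        return "riser_leading_partial"
    if segments[0] == "tread" and segments[-1] == "riser":
        return "tread_leading_partial"
    if n_treads == n_risers:
        return "complete_steps"
    return "mixed_partial"
```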


Applying the RANSAC algorithm in conjunction with the classification rules is an important part of our approach. It not only facilitates the precise identification of corner points but also contributes significantly to the overall accuracy of the feature extraction process. This level of precision is essential for subsequent analyses and applications, enhancing the robustness and reliability of our method.


               Once the initial model is determined via RANSAC and the feature corner points are successfully extracted, we
               proceed with the KNN-augmented ICP [37]  for point cloud registration and estimation of the motion of the
               depth camera integrated into the walking-aid robot. The integration of RANSAC and the KNN-augmented
               ICP enables a two-tiered feature extraction and registration approach. Initially, RANSAC provides a robust es-
               timation of the staircase geometry, effectively handling outliers and incomplete data. Subsequently, KNN-ICP
               refines this estimation by ensuring that only the most relevant points are used for final alignment, enhancing
               the accuracy under dynamic conditions and restricted viewpoints.
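
For concreteness, the sketch below shows a minimal point-to-point ICP loop in which a K = 1 nearest-neighbor query (here via scikit-learn's NearestNeighbors) supplies the correspondences at every iteration, and the accumulated rigid transform approximates the camera motion between two frames. The function names, parameters, and convergence criterion are illustrative assumptions; the KNN-augmented ICP of [37] may differ in its details.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def knn_icp(source, target, max_iter=50, tol=1e-6):
    """Align feature points from one frame to the next using K=1 correspondences."""
    nn = NearestNeighbors(n_neighbors=1).fit(target)
    src = source.copy()
    dim = source.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = nn.kneighbors(src)          # K = 1 nearest neighbors
        matched = target[idx[:, 0]]
        R, t = best_fit_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:           # converged
            break
        prev_err = err
    return R_total, t_total
```

In this setting, source would hold the feature corner points extracted at timestep t and target those at timestep t + 1, so the returned (R, t) approximates the motion of the depth camera between the two frames.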

The KNN-augmented ICP comprises the following key steps:


KNN Point Selection: In KNN-ICP, we start by using the KNN algorithm to select the nearest neighbors for each feature point in the point cloud $P_{\text{feature},t}$ captured at timestep $t$, i.e., we match each feature point in $P_{\text{feature},t}$ with the nearest $K$ points in the point cloud $P_{\text{feature},t+1}$ captured at timestep $t+1$. Since we are only interested in the single nearest neighbor of each feature point for ICP alignment, the value of $K$ is set to 1. This can be expressed as: