$$\mathrm{KNN}(p_i) = \{q_i\}. \tag{3}$$

Here, $p_i$ is the $i$-th point in point cloud $P_{\text{feature},t}$, $\mathrm{KNN}(p_i)$ represents finding the nearest point in point cloud $P_{\text{feature},t+1}$ matched with $p_i$, and $q_i$ indicates the nearest neighbor point.
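In practice, Eq. (3) is typically implemented with a k-d tree so that each query runs in logarithmic rather than linear time. Below is a minimal sketch using SciPy's `cKDTree`; the function name `knn_match` and the array shapes are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_match(P_t, P_t1):
    """For each point p_i in P_feature,t, find its nearest neighbor q_i
    in P_feature,t+1 (Eq. 3). Inputs are (N, d) and (M, d) point arrays."""
    P_t1 = np.asarray(P_t1)
    tree = cKDTree(P_t1)               # index the target cloud once
    dists, idx = tree.query(P_t, k=1)  # one nearest neighbor per query point
    return P_t1[idx], dists            # matched points q_i and their distances
```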
ICP Transformation Optimization: Once the KNN matching is done, we proceed to optimize the transfor-
mation between the matched points using the ICP algorithm. The objective of ICP is to find a transformation
matrix that maps the points from two sequential point clouds while minimizing the distance between them.
ICP typically uses the least squares method to minimize the error, which can be expressed as:
$$\min_{T} \sum_{i=1}^{N} \left\| T(p_i) - q_i \right\|^2. \tag{4}$$

Here, $T$ is the transformation matrix, $N$ is the number of points in point cloud $P_{\text{feature},t}$, $p_i$ is the $i$-th point in point cloud $P_{\text{feature},t}$, and $q_i$ is the nearest point in point cloud $P_{\text{feature},t+1}$ matched with $p_i$. The iteration of ICP
begins with an initial guess of the transformation matrix. At each iteration, for each point in the source point cloud $P_{\text{feature},t}$, we find the closest point in the target point cloud $P_{\text{feature},t+1}$. The transformation matrix is updated using the least squares method and applied to the source point cloud. The iteration process continues
until the transformations converge ($\Delta \leq \Delta_{\mathrm{th}}$, where $\Delta$ is the change in the displacement component of the transformation matrix in each iteration and $\Delta_{\mathrm{th}}$ is the threshold for convergence, taken as $10^{-6}$) or until the maximum number of iterations (taken as 20) is reached to obtain the final transformation.
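For concreteness, the loop above can be sketched as follows. This is a minimal point-to-point ICP under stated assumptions: the least-squares step of Eq. (4) is solved in closed form via SVD (the standard rigid-registration solution), correspondences come from a k-d tree as in Eq. (3), and the convergence check uses the $10^{-6}$ threshold and 20-iteration cap given in the text. All names are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(P_src, P_tgt, max_iters=20, d_th=1e-6):
    """Point-to-point ICP aligning P_feature,t (source) to P_feature,t+1
    (target). Returns rotation R and translation t."""
    P_src = np.asarray(P_src, dtype=float)
    P_tgt = np.asarray(P_tgt, dtype=float)
    dim = P_src.shape[1]
    R, t = np.eye(dim), np.zeros(dim)    # initial guess: identity transform
    P = P_src.copy()
    tree = cKDTree(P_tgt)                # target is fixed; index it once
    for _ in range(max_iters):
        _, idx = tree.query(P, k=1)      # nearest neighbors, as in Eq. (3)
        Q = P_tgt[idx]
        # Closed-form least-squares solution of Eq. (4) via SVD
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - mu_p).T @ (Q - mu_q))
        if np.linalg.det(Vt.T @ U.T) < 0:
            Vt[-1] *= -1                 # guard against reflections
        dR = Vt.T @ U.T
        dt = mu_q - dR @ mu_p
        P = P @ dR.T + dt                # apply the update to the source cloud
        t_new = dR @ t + dt              # compose with the running transform
        delta = np.linalg.norm(t_new - t)  # change in displacement component
        R, t = dR @ R, t_new
        if delta <= d_th:                # threshold 1e-6 from the text
            break
    return R, t
```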
Trajectory Estimation: Utilizing the derived transformations between point clouds captured at sequential
time steps, we can effectively estimate the camera motion trajectory by aggregating the frame-to-frame dis-
placements of the point clouds. In detail, the camera motion is the inverse of the apparent motion of the point cloud it captures. Assuming the sequentially derived frame-to-frame displacements between point clouds from timestep 1 to $n$ are $(x_k, y_k)$, $k \in \{1, 2, \ldots, n\}$, the accumulated trajectory of the camera is then calculated by:

$$Tr_{\mathrm{camera}} = \left\{ (-x_1, -y_1),\ (-x_1 - x_2, -y_1 - y_2),\ \ldots,\ \left(-\sum_{k=1}^{n} x_k,\ -\sum_{k=1}^{n} y_k\right) \right\}. \tag{5}$$
The trajectory of the robot can be further calculated from $Tr_{\mathrm{camera}}$ and the forward kinematic model of the
robot. This trajectory information proves invaluable for gaining insights into the dynamic movements of both
the camera and the walking-aid robot as they navigate the environment over an extended period.
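Under the planar-displacement reading of Eq. (5) used above, the accumulation reduces to a negative cumulative sum. A minimal sketch, where the function name and input format are assumptions:

```python
import numpy as np

def camera_trajectory(displacements):
    """Accumulate frame-to-frame point-cloud displacements (x_k, y_k) into
    the camera trajectory of Eq. (5). The camera moves opposite to the
    apparent motion of the point cloud, hence the sign flip."""
    return -np.cumsum(np.asarray(displacements), axis=0)

# Example: three frame-to-frame displacements -> three trajectory points
traj = camera_trajectory([(0.02, 0.01), (0.018, 0.012), (0.021, 0.009)])
```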
Following these steps, we effectively extract staircase shape features and register the point clouds from different
frames. The pseudocode of the proposed algorithm is given in Algorithm 1. The code and some
evaluation datasets have been released at https://github.com/Seas00n/MPV_2024.git.
3. PERFORMANCE EVALUATION
3.1 Comparisons with existing methods
In this section, we evaluate our presented staircase shape feature extraction method and draw comparisons
with our prior approach [31], which focuses solely on extracting convex corner points, and with a widely used, state-of-the-art method, the built-in ICP algorithm of Open3D. To assess the performance of our feature
extraction method, we employ an indirect evaluation metric, that is, the absolute trajectory error between the