Chen et al. Intell Robot 2024;4:179-95 I http://dx.doi.org/10.20517/ir.2024.11 Page 181
the environment by perceiving relative motion between frames [31] . Utilizing this information can guide the
walking-aid robot to achieve more robust movement in complex environments. The above process often involves the following steps:
• Data Acquisition: Use sensors such as depth cameras and LiDAR [18] to obtain point cloud data from the
environment.
• Feature Extraction: Extract key features from the point cloud [31,32] , such as planar surfaces or corners, for
matching in subsequent steps.
• Point Cloud Registration Algorithm: Employ registration algorithms to align feature points collected at
different times or positions [33,34] , creating a unified environment map and estimating the relative motion
between adjacent frames by comparing their features.
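As an illustration of the registration step above, the sketch below estimates the rigid motion between two frames from already-matched feature points using the standard SVD-based (Kabsch) closed-form solution. This is a minimal sketch under the assumption that correspondences are already established; the function name and test data are illustrative, not taken from the paper:

```python
import numpy as np

def estimate_relative_motion(src, dst):
    """Estimate the rigid transform (R, t) aligning matched feature
    points src -> dst via the SVD-based (Kabsch) closed-form solution.
    src, dst: (N, 3) arrays of corresponding points from two frames."""
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Example: apply a known small rotation/translation and recover it
rng = np.random.default_rng(0)
pts = rng.random((10, 3))
angle = np.deg2rad(5)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.0, 0.02])
R_est, t_est = estimate_relative_motion(pts, pts @ R_true.T + t_true)
```

In a full pipeline, this closed-form step would sit inside an iterative loop (as in ICP), re-estimating correspondences and motion until convergence.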
In specific scenarios, systematically extracting corner points or straight lines from the point cloud for registration can reduce computational costs and mitigate the risk of erroneous matching caused by shape similarities. The
core of our study is an improved feature extraction method that enables these robots to better understand
and interact with their surroundings, especially staircases. Our previous work concentrated on extracting
convex corners from staircases in the point cloud. Although the method was effective in many scenarios, we
identified certain limitations, particularly in situations with restricted perspectives, leading to errors in point
cloud alignment. Our current research addresses these issues, presenting an innovative approach that offers
more robust and accurate feature extraction, even in challenging viewpoints.
This paper outlines our novel method, beginning with acquiring 3D point cloud data using depth cameras
mounted on walking-aid robots. We delve into the specifics of transforming this data into a two-dimensional
(2D) representation and the subsequent steps for feature extraction. Our approach is comprehensive, considering various camera perspectives and incorporating both convex and concave corners in the extraction process. We employ algorithms such as Random Sample Consensus (RANSAC) and K-Nearest Neighbors (KNN)-augmented Iterative Closest Point (ICP) to enhance the accuracy and efficiency of the extraction and registration process.
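As a minimal illustration of how RANSAC can isolate a straight step edge from 2D projected points, the sketch below fits a single line by repeated random sampling and inlier counting. The function name, thresholds, and test data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ransac_line(points, n_iters=200, thresh=0.01, seed=0):
    """Fit a 2D line with RANSAC: repeatedly sample two points, count
    inliers within `thresh` of the implied line, keep the best model.
    points: (N, 2) array.
    Returns (point_on_line, unit_direction, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_inliers, best = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:                       # degenerate sample
            continue
        d = d / norm
        normal = np.array([-d[1], d[0]])      # perpendicular to the line
        dist = np.abs((points - points[i]) @ normal)
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (points[i], d)
    return best[0], best[1], best_inliers

# Points on a horizontal step edge plus a few outliers
xs = np.linspace(0, 1, 20)
step = np.column_stack([xs, np.full_like(xs, 0.3)])
outliers = np.array([[0.5, 0.9], [0.2, 0.7]])
_, direction, mask = ransac_line(np.vstack([step, outliers]))
```

In a staircase profile, this fit would be applied repeatedly, removing the inliers of each detected edge before fitting the next; line intersections then yield candidate corner points.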
The main contribution of this paper is a robust method for extracting staircase shape features from point clouds. This method significantly improves upon previous techniques by accurately identifying feature corner points in staircases, even under restricted viewpoints and fast movement scenarios. This
enhancement addresses the limitations of earlier methods and ensures more reliable and robust feature extraction. By integrating the RANSAC algorithm and the KNN-augmented ICP method, the paper achieves improved point cloud registration performance. This advancement significantly enhances the efficiency and robustness of point cloud processing in walking-aid robots traversing stairs.
This work represents a step forward for assistive robots in complex terrains. By improving the perception capabilities of walking-aid robots, it aims to provide safer and more reliable walking assistance, thereby enhancing the independence and well-being of individuals with mobility challenges.
2. METHODS
In this section, we introduce our novel approach for extracting staircase shape features. In our earlier work [31] ,
our feature extraction method focused solely on extracting convex corners as feature points from each staircase in the point cloud. However, we encountered limitations in certain scenarios where feature points could not be
reliably extracted due to restricted perspectives. These limitations led to cumulative errors when performing
frame-to-frame point cloud alignment.
To address these challenges and enhance the performance of staircase feature extraction and subsequent point
cloud registration, we present an innovative method. This method aims to provide more robust and accurate

