1. INTRODUCTION
Understanding the environment is crucial for robots to traverse complex terrains, especially uneven terrains involving stairs [1–3]. Accurately perceiving and recognizing the shape features of staircases remains a significant challenge for walking-aid robots and directly influences their autonomy and safety. This paper presents an innovative and robust staircase shape feature extraction method to enhance the environment perception abilities of walking-aid robots.
In recent years, the development of robotic systems has increasingly focused on assisting individuals in various daily activities [4–7], notably in mobility assistance [8–21]. Walking-aid robots have emerged as important tools that help individuals with limited mobility walk across various terrains. However, their seamless traversal of stairs still requires improvement. The ability of these robots to help people navigate stairs successfully depends on accurate detection and negotiation of staircases, underscoring the importance of efficient environment perception.
Plenty of research on staircase detection has been conducted using images or point clouds. Earlier approaches, such as those by Cong et al. [22] and Murakami et al. [23], utilized image-based methods, including edge detection on RGB images. However, these methods perform poorly in low-light conditions and lack depth information. Point clouds, often collected from Light Detection and Ranging (LiDAR) sensors and depth cameras, have been utilized to overcome these issues. A depth point cloud helps estimate the geometry and location of staircases accurately, which is crucial for navigation. Various techniques, including variants of the Random Sample Consensus (RANSAC) algorithm, have been used extensively for plane segmentation and staircase detection [24]. Recent advancements in stair detection include a deep learning-based end-to-end method that treats stair line detection as a multitask problem combining coarse-grained semantic segmentation and object detection; this strategy has shown high accuracy and speed, significantly outperforming previous methods [25]. Open3D [26], an open-source three-dimensional (3D) data processing library, has also been widely applied to process staircase point clouds [27].
However, implementing existing staircase recognition methods on walking-aid robots presents significant challenges. These challenges stem from the limited computational capacity of such robots and the dynamic nature of their movement alongside human users, which often results in unpredictable fluctuations in the captured images or point clouds. Additionally, the low mounting position of the camera on these robots restricts its field of view, further complicating recognition. These factors collectively make it difficult to deploy staircase recognition technologies effectively on walking-aid robots.
Therefore, related work on these robots has usually focused on recognizing simple geometric dimensions of the environment structure. References [28] and [29] proposed using a depth camera to measure the size of obstacles and the distance between the exoskeleton and obstacles to determine whether an obstacle is crossable; based on these assessments, the exoskeleton adapts by switching between predefined operational modes. Reference [30] proposed reducing the dimensionality of the depth point cloud of the environment to lower the computational burden and recognizing the shape parameters of the stairs (e.g., the height and width of the staircase) with the RANSAC algorithm. A significant drawback of existing methods is the lack of global positioning and motion state information. The ability to adapt based on predefined operational modes does not compensate for the robot's inability to understand its position within a larger environmental context. This limitation makes it hard to execute predictive and adaptive control on robots, which is essential for advanced navigation and safety strategies, and it reduces the robot's effectiveness in complex or dynamically changing environments.
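To illustrate the dimension-reduction idea in [30], the sketch below projects a stair point cloud onto a 2D side profile (forward distance, height) and uses a simple RANSAC loop to find tread heights, whose consecutive differences approximate the riser height. All function names, parameters, and thresholds here are hypothetical and not taken from the cited work.

```python
import numpy as np

def estimate_tread_heights(points_xz, n_iters=500, thresh=0.01, min_inliers=50):
    """RANSAC-fit horizontal lines (z = const) in a 2D stair side profile.

    points_xz: (N, 2) array; column 0 is forward distance, column 1 is height.
    Returns sorted tread heights; consecutive differences approximate
    the riser height of each step.
    """
    rng = np.random.default_rng(0)
    remaining = np.asarray(points_xz, dtype=float)
    tread_heights = []
    while len(remaining) >= min_inliers:
        best = np.zeros(len(remaining), dtype=bool)
        for _ in range(n_iters):
            # Hypothesize a tread height from one randomly chosen point.
            z0 = remaining[rng.integers(len(remaining)), 1]
            inliers = np.abs(remaining[:, 1] - z0) < thresh
            if inliers.sum() > best.sum():
                best = inliers
        if best.sum() < min_inliers:
            break
        tread_heights.append(remaining[best, 1].mean())
        remaining = remaining[~best]  # peel off this tread and repeat
    return sorted(tread_heights)

# Synthetic check: three treads at heights 0.00 m, 0.15 m, 0.30 m.
rng = np.random.default_rng(1)
cloud = np.concatenate([
    np.column_stack([rng.uniform(i * 0.3, i * 0.3 + 0.28, 200),
                     np.full(200, i * 0.15) + rng.normal(0, 0.003, 200)])
    for i in range(3)
])
heights = estimate_tread_heights(cloud)
print(np.round(np.diff(heights), 3))  # riser heights, approximately [0.15, 0.15]
```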
In light of these limitations, our research addresses these gaps by offering a more comprehensive method for environmental perception. By focusing on advanced feature extraction techniques and point cloud registration algorithms, our method aims to enhance the robot's ability to perceive, understand, and adapt to its surroundings. This includes improving global positioning and motion state awareness, which are crucial for predictive and adaptive control. We have conducted preliminary work to construct an environmental map through point cloud registration algorithms and achieve real-time self-awareness of the robot's position in