Figure 1. An overview of the proposed staircase shape feature extraction method. 3D: Three-dimensional; IMU: inertial measurement unit.
Figure 2. An example of how the depth camera is integrated into typical walking-aid robots, e.g., lower-limb prostheses.
feature extraction from staircases, even with constrained viewpoints. In this way, it helps mitigate the issues
associated with cumulative errors during point cloud alignment, improving overall performance.
An overview of the method is depicted in Figure 1. We acquire 3D point cloud data of the environment using a depth camera mounted on walking-aid robots. These robots include powered prostheses [1,35] and lower-limb exoskeletons [36]. To provide context, Figure 2 illustrates the integration of the depth camera into typical walking-aid robots.
First, the raw 3D environmental point cloud, denoted as $P_{3D}^{Camera}$, is rotated by Equation (1) to align with the ground coordinate system:

$$P_{3D}^{Ground} = R_{Camera}^{Ground} \cdot P_{3D}^{Camera}, \tag{1}$$

where $P_{3D}^{Ground}$ and $P_{3D}^{Camera}$ are the point clouds in the ground coordinate system and camera coordinate system, respectively, and $R_{Camera}^{Ground}$ is the rotation matrix from the camera to the ground coordinate system, which can be calculated from the Euler angles measured by the inertial measurement unit (IMU) attached to the camera.
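As a concrete illustration, the sketch below implements Equation (1) with NumPy and SciPy. The function name, the "xyz" Euler convention, and the row-vector point layout are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of Equation (1): rotate a depth-camera point cloud into
# the ground frame using a rotation matrix built from IMU Euler angles.
import numpy as np
from scipy.spatial.transform import Rotation


def rotate_to_ground(points_camera: np.ndarray,
                     roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Apply P_ground = R_camera^ground @ p to each point of an (N, 3) cloud.

    roll/pitch/yaw are the camera attitude angles in radians as reported by
    the IMU; the "xyz" Euler convention here is an assumption.
    """
    R = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()  # 3x3
    # Right-multiplying by R.T applies R to each row vector (point).
    return points_camera @ R.T
```

In practice, only roll and pitch are needed to level the cloud with respect to gravity, so a call such as `rotate_to_ground(cloud, roll, pitch, 0.0)` would suffice for ground alignment.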
The rotated point cloud $P_{3D}^{Ground}$ is then subjected to dimension reduction, as given in Equation (2), which uses the total number of points in $P_{3D}^{Ground}$ and the set of indexes of the points whose coordinate along the axis perpendicular to the human's sagittal plane lies between -0.1 and 0.1 m. This subset extraction
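A minimal sketch of this slicing step is given below; treating column index 1 ($y$) of the ground-frame cloud as the lateral axis perpendicular to the sagittal plane, and returning the remaining $(x, z)$ coordinates, are both assumptions for illustration.

```python
# Minimal sketch of the dimension-reduction step: keep only points whose
# assumed lateral (y) coordinate lies within +/- 0.1 m of the sagittal
# plane, then drop that dimension to obtain a 2D slice.
import numpy as np


def slice_sagittal(points_ground: np.ndarray,
                   half_width: float = 0.1) -> np.ndarray:
    """Reduce an (N, 3) ground-frame cloud to an (M, 2) sagittal-plane slice."""
    y = points_ground[:, 1]              # assumed lateral axis
    mask = np.abs(y) <= half_width       # indexes with |y| <= 0.1 m
    # Keep the (x, z) coordinates of the selected points.
    return points_ground[mask][:, [0, 2]]
```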