4.2 Potential for real-time control of walking-aid robots
The proposed method is promising for integration into the real-time control of walking-aid robots. Specifically, when applied to a powered transfemoral prosthesis, it can substantially improve the prosthesis’s environmental awareness, particularly during stair climbing. By combining our feature extraction method with forward kinematics modeling, the prosthesis obtains a comprehensive estimate of its whole-body position within the environment, which is critical for navigating the complex geometry of staircases with greater precision and safety.
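As an illustration of how such whole-body awareness can be obtained, the sketch below chains a planar forward kinematics model from an estimated hip position down to the toe. The segment lengths, angle conventions, and function name are illustrative assumptions, not the kinematic model used in this work.

```python
import numpy as np

# Assumed segment lengths in metres (illustrative values only).
L_THIGH, L_SHANK, L_FOOT = 0.42, 0.40, 0.15

def prosthesis_fk(hip_xz, thigh_incl, shank_incl, foot_incl):
    """Knee, ankle, and toe positions in the sagittal plane (x forward, z up).

    thigh_incl and shank_incl are absolute segment inclinations from the
    vertical (rad, forward positive); foot_incl is measured from the horizontal.
    hip_xz is the hip position estimated from the camera pose in the world frame.
    """
    hip = np.asarray(hip_xz, dtype=float)
    knee = hip + L_THIGH * np.array([np.sin(thigh_incl), -np.cos(thigh_incl)])
    ankle = knee + L_SHANK * np.array([np.sin(shank_incl), -np.cos(shank_incl)])
    toe = ankle + L_FOOT * np.array([np.cos(foot_incl), np.sin(foot_incl)])
    return knee, ankle, toe

# Example: hip estimated 0.9 m above the current stair tread, mid-swing posture.
knee, ankle, toe = prosthesis_fk(hip_xz=(0.0, 0.9),
                                 thigh_incl=0.4, shank_incl=-0.2, foot_incl=0.0)
```

With the camera pose anchored to the stair features, the same chain places every joint of the prosthesis in the staircase’s coordinate frame.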
Moreover, the ability to estimate the prosthesis’s translational velocity with our feature extraction and ICP method offers a further advantage for its control system. Combined with the positional awareness provided by feature extraction, this velocity estimate allows a Kalman filter with an accurate state transition equation to be deployed to predict the prosthesis’s pose. Such prediction is essential for implementing model predictive control, enabling the prosthesis to move freely and efficiently on stairs, avoid collisions with stair risers, and provide a smooth locomotion experience for the user.
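The sketch below illustrates this idea with a constant-velocity Kalman filter whose state combines the position obtained from feature extraction and the translational velocity obtained from ICP. The state layout, noise covariances, and update period are assumptions for illustration, not values from this work.

```python
import numpy as np

dt = 0.009  # assumed perception period: ~6 ms feature extraction + ~3 ms ICP

# Constant-velocity state transition for the state [x, z, vx, vz].
F = np.array([[1.0, 0.0, dt,  0.0],
              [0.0, 1.0, 0.0, dt ],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
H = np.eye(4)                          # position (features) and velocity (ICP) are both measured
Q = np.diag([1e-5, 1e-5, 1e-3, 1e-3])  # assumed process noise
R = np.diag([1e-4, 1e-4, 1e-2, 1e-2])  # assumed measurement noise

def kf_step(x, P, z):
    """One predict/update cycle; x is the state, P its covariance, z the measurement."""
    # Predict the next pose with the constant-velocity state transition equation.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct the prediction with the fused feature/ICP measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Example: start from rest and fuse one measurement [x, z, vx, vz].
x, P = np.zeros(4), np.eye(4) * 1e-2
x, P = kf_step(x, P, z=np.array([0.02, 0.01, 0.9, 0.3]))
```

In practice the state and transition model would follow whatever pose parameterization the controller uses; the point here is only that both position and velocity are directly observable from the proposed pipeline, which is what makes the predicted pose usable inside a model predictive controller.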
The efficiency of our method is underscored by the average time consumption per iteration of feature extraction and ICP, measured at approximately 6 and 3 ms, respectively, in our experiments. Given that the swing phase of a human gait cycle lasts approximately 800 ms [10,38], a perception cycle of roughly 9 ms leaves ample margin for real-time model predictive control of walking-aid robots, permitting dozens of updates within a single swing phase. This demonstrates the feasibility of integrating our method into the control systems of such walking-aid devices without introducing latency that could compromise operational efficiency or safety.
While this paper does not focus on real-time control, the implications of our findings for this application
are profound. Integrating our proposed method with the control systems of walking-aid robots represents a
promising direction for future research. In subsequent work, we plan to delve deeper into this integration,
aiming to showcase the full potential of our method in enhancing the autonomy, adaptability, and safety of
walking-aid robots in real-world scenarios.
4.3 Practical application scenarios
One notable application scenario for our method is residential and public buildings, where staircases vary widely in design and complexity. Our method enables walking-aid robots to accurately identify and navigate these staircases, adapting to different angles, widths, and materials. For instance, in a multi-floor home, a robot could assist individuals by safely guiding them up and down stairs, adjusting its movements in real time to avoid obstacles and maximize safety. The method enhances the autonomy of the walking-aid robot by giving it a detailed understanding of its surroundings and of its joint positions in the global coordinate system while moving on stairs; thus, the robot can (1) identify the start and end points of staircases; (2) calculate the safest collision-free path using methods such as artificial potential fields (a simplified sketch is given after this paragraph) and maintain balance using, e.g., whole-body control; (3) dynamically adjust its joint movements based on its own motion state and the staircase geometry detected by our feature extraction method; and (4) autonomously navigate between staircases using, e.g., optimal control [39]. Such adaptability is crucial for allowing the robot to operate independently, without constant human supervision, thereby improving the efficiency of the assistance provided to patients.
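To make item (2) concrete, the following sketch shows one artificial-potential-field step that pulls the foot toward the next foothold while pushing it away from detected riser points. The gains, influence radius, step size, and point format are assumptions for illustration only, not parameters used in this work.

```python
import numpy as np

K_ATT, K_REP, D_INFL = 1.0, 0.05, 0.15   # assumed gains and influence radius (m)
STEP = 0.01                              # assumed step length per planning cycle (m)

def apf_step(pos, goal, riser_points):
    """Return a small displacement steering `pos` toward `goal` and away from riser points."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = K_ATT * (goal - pos)                  # attractive pull toward the next foothold
    for p in np.asarray(riser_points, float):     # repulsive push from each detected riser point
        diff = pos - p
        d = np.linalg.norm(diff)
        if 1e-6 < d < D_INFL:
            force += K_REP * (1.0 / d - 1.0 / D_INFL) / d**2 * (diff / d)
    norm = np.linalg.norm(force)
    return STEP * force / norm if norm > 1e-9 else np.zeros_like(pos)

# Example: steer the toe toward the next tread while avoiding a detected riser edge.
toe = np.array([0.10, 0.05])
toe = toe + apf_step(toe, goal=[0.35, 0.18], riser_points=[[0.28, 0.10]])
```

Here the riser points would come directly from the stair features extracted by our method, so the planner and the perception pipeline share the same coordinate frame.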
The proposed method can also be applied in outdoor scenarios. Compared to depth cameras based on struc-
tured light (such as Intel RealSense and Microsoft Kinect V1) or stereo vision methods, the ToF camera used
in this work is more resistant to external light interference [40] and can acquire the 3D environmental point
cloud in front of the camera under outdoor lighting conditions.