

To demonstrate the long-term stability of RPNIs, Vu et al.[37] used a NB classifier to decode hand postures in real time and offline. A cue hand instructed participants to perform a specific posture, which they volitionally mirrored with their phantom limb and matched using a separate virtual hand. EMG data from the RPNIs alone were used to train the NB classifier.
The accuracy of the classifier was quantified by the number of correct predictions, whereas its speed was measured as the time between EMG onset and the first correct prediction. They were able to decode five different finger postures in each subject, both
               offline and in real time, using RPNI signals. When the classifier was trained using both RPNI and residual
               muscle signals, researchers were able to decode four different grasping postures. Subjects successfully
controlled a hand prosthesis in real time for up to 300 days without recalibration of the control algorithm, showing
               the potential of RPNIs for clinical translation. These findings highlight the long-term signal stability of
               RPNIs, which is the result of stable NMJs and robust vascularization over time. These features collectively
               support durable signal amplitude and decoding accuracy over extended periods, reinforcing the role of
               RPNIs as a reliable interface for neural prosthetic control.
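As a rough illustration of this type of decoding pipeline, the sketch below trains a Gaussian Naive Bayes classifier on per-channel EMG features and reports offline accuracy before classifying a new window. The channel count, window length, feature choice (mean absolute value), and posture labels are illustrative assumptions, not the published protocol or data.

```python
# Minimal sketch of posture decoding with a Gaussian Naive Bayes classifier.
# Channel count, window length, feature choice, and posture labels are
# illustrative assumptions, not the pipeline used by Vu et al.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_CHANNELS = 8          # hypothetical number of RPNI EMG channels
WINDOW_SAMPLES = 150    # hypothetical analysis window length (samples)
POSTURES = ["rest", "point", "pinch", "key", "fist"]  # five example postures

def mean_absolute_value(window):
    """Mean absolute value (MAV) feature for each EMG channel."""
    return np.mean(np.abs(window), axis=0)

# Simulated training data: one MAV feature vector per cued posture repetition.
X = rng.normal(size=(500, N_CHANNELS))
y = rng.integers(0, len(POSTURES), size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = GaussianNB()
clf.fit(X_train, y_train)

# Offline accuracy: fraction of windows whose predicted posture matches the cue.
print("offline accuracy:", clf.score(X_test, y_test))

# Real-time use: classify each incoming EMG window and report the decoded posture.
new_window = rng.normal(size=(WINDOW_SAMPLES, N_CHANNELS))  # placeholder raw EMG
posture = POSTURES[clf.predict(mean_absolute_value(new_window)[None, :])[0]]
print("decoded posture:", posture)
```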

               Clinical integration
               By combining novel surgical techniques with machine learning algorithms, researchers have achieved more
               intuitive, precise, and stable prosthetic control. Patients can control prosthetic devices more naturally by
               simply thinking about the desired movement. This intuitive control reduces the cognitive load on users and
               makes prosthetic use more seamless in everyday life. The ability to distinguish between multiple hand
               postures enables the use of advanced multi-articulated prosthetic hands. Patients can perform a wider range
               of movements, including individual finger control, enhancing the functionality of their prosthetic devices.
               The system can be used in virtual environments for patient training and rehabilitation. This allows patients
               to practice and improve their control over the prosthetic device in a safe, controlled setting before using it in
               daily life. The ability to decode multiple finger postures and grasping patterns enables personalized
               prosthetic programming. This customization can be tailored to each patient’s specific needs and lifestyle
               requirements. The precise control offered by RPNI-based pose identification can be integrated into
               occupational therapy programs, helping patients regain independence in work-related tasks and potentially
               facilitating return to employment. To address individual patient variability and signal drift in clinical
               applications, adaptive algorithms have been proposed. For example, Kalman filters can be tuned to
               individual patients by incorporating user-specific calibration data during initial training, followed by
periodic recalibration sessions to update model parameters[7,38,39]. Alternatively, using adaptive or extended Kalman filters allows the algorithm to adjust to changing signal dynamics in real time[40,41]. Machine learning models such as HMM-NB can be trained with small amounts of user-specific data and improved incrementally with user feedback[42]. These approaches aim to reduce the burden of frequent recalibration
               and improve the long-term usability of prosthetic systems in daily life. As research in this field progresses,
               the integration of machine learning with RPNI technology provides a way to bridge the gap between
               biological intent and prosthetic action, potentially revolutionizing the field of neuroprosthetics. For clarity,
               we summarized important aspects of each machine learning algorithm in an overview table [Table 1].
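To make the adaptive filtering idea concrete, the sketch below implements a simple linear Kalman decoder whose measurement-noise covariance is nudged toward the observed innovation at each step, so the filter can track slow signal drift between recalibration sessions. The class name, the choice of state (intended finger velocities), the matrix dimensions, and the specific adaptation rule are assumptions made for demonstration, not a published patient-specific implementation.

```python
# Minimal sketch of an adaptive Kalman filter for continuous prosthesis
# decoding. The decoding model (state = intended finger velocities,
# observation = EMG feature vector) and the innovation-based update of the
# measurement noise covariance R are illustrative assumptions.
import numpy as np

class AdaptiveKalmanDecoder:
    def __init__(self, A, H, Q, R, alpha=0.05):
        self.A, self.H, self.Q, self.R = A, H, Q, R
        self.alpha = alpha                    # forgetting factor for R adaptation
        self.x = np.zeros(A.shape[0])         # decoded state (e.g., finger velocities)
        self.P = np.eye(A.shape[0])           # state covariance

    def step(self, z):
        # Predict the intended movement from the previous state.
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.Q

        # Innovation: how far the new EMG features deviate from the prediction.
        y = z - self.H @ x_pred
        S = self.H @ P_pred @ self.H.T + self.R

        # Adapt R toward the observed innovation statistics so the filter
        # tracks slow signal drift without a full recalibration session.
        self.R = (1 - self.alpha) * self.R + self.alpha * np.outer(y, y)

        # Standard Kalman update.
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x

# Example with hypothetical dimensions: 2 decoded velocities, 8 EMG features.
rng = np.random.default_rng(0)
decoder = AdaptiveKalmanDecoder(A=np.eye(2), H=rng.normal(size=(8, 2)),
                                Q=0.01 * np.eye(2), R=np.eye(8))
for _ in range(100):
    z = rng.normal(size=8)                    # placeholder EMG feature vector
    state = decoder.step(z)
print("decoded state:", state)
```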

               CONCLUSION
The use of machine learning has had a significant positive impact on the field of upper limb rehabilitation. New possibilities now exist for enhancing prosthetic control and functionality by decoding complex EMG signals and translating them into natural, intuitive prosthetic movements. This has not only allowed for multi-articulated movements but also improved user
               experiences. Despite these advancements, several challenges remain, including the need for more robust
               algorithms, improved long-term stability of neural interfaces, and enhanced sensory feedback mechanisms.
               Future research should focus on developing more sophisticated machine learning algorithms that can adapt