{"title":"Irrelevant Locomotion Intention Detection for Myoelectric Assistive Lower Limb Robot Control","authors":"Xiaoyu Song;Jiaqing Liu;Heng Pan;Haotian Rao;Can Wang;Xinyu Wu","doi":"10.1109/TMRB.2025.3550736","DOIUrl":null,"url":null,"abstract":"In this study, we propose a robust myoelectric intention recognition framework to recognize human locomotion mode and detect irrelevant locomotion intention. The framework is integrated into the control system of the lower limb exoskeleton robot for experimental validation. Most conventional electromyography (EMG) intention detection methods aim to accurately detect the target motion intentions but ignore the possible effects of irrelevant intentions. In traditional action intention recognition strategies, most researchers did not consider entering irrelevant action intentions into the model during training. Therefore, when using a classification model, if irrelevant action intentions are input, the model will still recognize it as a type of target action intention. That can lead to incorrect recognition results, which will cause the robot to perform wrong movements and pose a safety risk to the wearer. To detect and reject irrelevant motion intentions, we first used the dual-purpose autoencoder-guided temporal convolution network (DA-TCN) to obtain discriminative features of the surface EMG signal. Autoencoders (AE)/Variable Autoencoders (VAE) are then trained for each of the seven deep features of the target motion intention. In addition, irrelevant motion intentions are detected according to the value of their reconstruction error. The recall rate of this method for the detection of irrelevant motion intentions exceeds 99% and the accuracy rate exceeds 99%.At the same time, we replaced the TCN with the LSTM model and compared the performance of the two after adding irrelevant motion discrimination. We collected data on seven goals and three unrelated motor intentions from seven experimenters for testing and completed an online experimental validation. The motion recognition accuracy of all the experimenters can be maintained above 86%.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"7 2","pages":"655-665"},"PeriodicalIF":3.8000,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on medical robotics and bionics","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10924298/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Abstract
In this study, we propose a robust myoelectric intention recognition framework to recognize human locomotion modes and detect irrelevant locomotion intentions. The framework is integrated into the control system of a lower limb exoskeleton robot for experimental validation. Most conventional electromyography (EMG) intention detection methods aim to accurately detect the target motion intentions but ignore the possible effects of irrelevant intentions. In traditional motion intention recognition strategies, most researchers do not include irrelevant motion intentions in the model during training. Consequently, when an irrelevant motion intention is input to such a classification model, the model still assigns it to one of the target motion intention classes. This produces incorrect recognition results, which can cause the robot to perform wrong movements and pose a safety risk to the wearer. To detect and reject irrelevant motion intentions, we first use the dual-purpose autoencoder-guided temporal convolutional network (DA-TCN) to obtain discriminative features of the surface EMG signal. An Autoencoder (AE)/Variational Autoencoder (VAE) is then trained on the deep features of each of the seven target motion intentions, and irrelevant motion intentions are detected according to the magnitude of the reconstruction error. The recall of this method for the detection of irrelevant motion intentions exceeds 99%, and its accuracy also exceeds 99%. We also replaced the TCN with an LSTM model and compared the performance of the two after adding irrelevant-motion discrimination. We collected data on seven target and three irrelevant motion intentions from seven participants for testing and completed an online experimental validation. The motion recognition accuracy for all participants is maintained above 86%.
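To illustrate the reconstruction-error rejection idea described in the abstract, the following is a minimal sketch (not the authors' code): one small autoencoder is fit per target class on that class's deep features, and a sample whose smallest reconstruction error still exceeds a per-class threshold is rejected as an irrelevant intention. The feature dimension, network sizes, and the percentile-based threshold rule are illustrative assumptions; real inputs would be DA-TCN features extracted from surface EMG.

```python
# Minimal per-class autoencoder rejection sketch (assumed architecture and thresholds).
import torch
import torch.nn as nn

FEAT_DIM = 64      # assumed dimensionality of the DA-TCN deep feature
NUM_CLASSES = 7    # seven target locomotion intentions

class ClassAE(nn.Module):
    """Small autoencoder trained on the deep features of a single target class."""
    def __init__(self, dim=FEAT_DIM, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit_class_ae(features, epochs=200, lr=1e-3):
    """Fit one autoencoder by minimizing the reconstruction MSE on one class."""
    ae = ClassAE()
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(features), features)
        loss.backward()
        opt.step()
    return ae

@torch.no_grad()
def classify_or_reject(x, class_aes, thresholds):
    """Return the predicted class index, or -1 for an irrelevant intention."""
    errors = torch.stack([((ae(x) - x) ** 2).mean() for ae in class_aes])
    best = int(errors.argmin())
    return best if errors[best] <= thresholds[best] else -1

if __name__ == "__main__":
    # Synthetic stand-ins for per-class DA-TCN features (real features come from EMG).
    class_feats = [torch.randn(100, FEAT_DIM) + i for i in range(NUM_CLASSES)]
    aes = [fit_class_ae(f) for f in class_feats]
    # One simple threshold choice: a high percentile of each class's training error.
    thresholds = torch.tensor([
        torch.quantile(((ae(f) - f) ** 2).mean(dim=1), 0.99).item()
        for ae, f in zip(aes, class_feats)
    ])
    target_like = torch.randn(1, FEAT_DIM) + 3     # resembles a trained class
    irrelevant = torch.randn(1, FEAT_DIM) * 10     # far from every trained class
    print(classify_or_reject(target_like, aes, thresholds))
    print(classify_or_reject(irrelevant, aes, thresholds))
```

In this sketch, rejection depends only on reconstruction error, so any threshold rule (percentile, mean plus a multiple of the standard deviation, or a validated cutoff) can be swapped in without changing the rest of the pipeline.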