Comparison of Admittance Control Dynamic Models for Transparent Free-Motion Human-Robot Interaction
Christopher K. Bitikofer, Eric T. Wolbrecht, Rene M. Maura, Joel C. Perry
IEEE International Conference on Rehabilitation Robotics (ICORR), 2023, pp. 1-6
Published: 2023-09-01
DOI: 10.1109/ICORR58425.2023.10304709
Citations: 0
Abstract
This paper presents an experimental comparison of multiple admittance control dynamic models implemented on a five-degree-of-freedom arm exoskeleton. The performance of each model is evaluated for transparency, stability, and impact on point-to-point reaching. Although ideally admittance control would render a completely transparent environment for physical human-robot interaction (pHRI), in practice there are trade-offs between transparency and stability, both of which can detrimentally impact natural arm movements. Here we test four admittance modes: 1) Low-Mass: low inertia with zero damping; 2) High-Mass: high inertia with zero damping; 3) Velocity-Damping: low inertia with damping; and 4) a novel Velocity-Error-Damping: low inertia with damping based on velocity error. A single subject completed two experiments: 1) 20 repetitions of a single reach-and-return, and 2) two repetitions of reach-and-return to 13 different targets. The results suggest that the proposed Velocity-Error-Damping model improves transparency the most, achieving a 70% average reduction of vibration power vs. Low-Mass, while also reducing user effort, with less impact on spatial/temporal accuracy than the alternative modes. Results also indicate that each model has unique situational advantages, so selecting between them may depend on the goals of the specific task (e.g., assessment, therapy). Future work should investigate merging approaches or transitioning between them in real time.
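To make the four admittance modes concrete, the sketch below simulates a one-dimensional admittance law, m*a = f_user - b*(v - v_ref), where setting v_ref = 0 recovers plain velocity damping and b = 0 recovers the undamped Low-Mass/High-Mass cases. This is a minimal illustration under assumed parameter values and a hypothetical reference-velocity term; it is not the authors' exoskeleton implementation, which operates in five degrees of freedom.

```python
# Minimal 1-D sketch of the four admittance modes compared in the paper.
# All numeric parameters (mass, damping, timestep) are illustrative assumptions.

def admittance_step(f_user, v, m, b, dt, v_ref=0.0):
    """One Euler integration step of m*a = f_user - b*(v - v_ref).

    With v_ref = 0 this is ordinary velocity damping; a nonzero v_ref
    (e.g., an estimate of the user's intended velocity) gives a
    velocity-error damping term instead.
    """
    a = (f_user - b * (v - v_ref)) / m
    return v + a * dt

# Hypothetical parameter sets for the four modes (values assumed).
MODES = {
    "low_mass":               dict(m=1.0, b=0.0),
    "high_mass":              dict(m=5.0, b=0.0),
    "velocity_damping":       dict(m=1.0, b=2.0),
    "velocity_error_damping": dict(m=1.0, b=2.0),  # plus a v_ref estimate
}

# Example: a constant 1 N user force applied for 0.5 s at 1 kHz.
dt, f = 0.001, 1.0
v = 0.0
for _ in range(500):
    v = admittance_step(f, v, dt=dt, **MODES["velocity_damping"])
```

Note the trade-off the paper examines: with b = 0 the commanded velocity grows without bound under a sustained force (maximally transparent but less stable), while the damped modes converge toward a steady-state velocity of f/b, trading transparency for stability.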