Personalizing Human Gait Entrainment: A Reinforcement Learning Approach to Optimizing Magnitude of Periodic Mechanical Perturbations

Impact Factor 4.6 · CAS Tier 2 (Computer Science) · JCR Q2 (Robotics)
Omik Save;Junmin Zhong;Suhrud Joglekar;Jennie Si;Hyunglae Lee
DOI: 10.1109/LRA.2025.3561574
Journal: IEEE Robotics and Automation Letters, vol. 10, no. 6, pp. 5673-5680
Published: 2025-04-16
URL: https://ieeexplore.ieee.org/document/10966021/
Citations: 0

Abstract

The feasibility of gait entrainment to periodic mechanical perturbations varies with perturbation magnitude in neurotypical individuals. Effective design of gait entrainment studies thus requires a systematic approach to personalize periodic perturbation parameters. However, current studies still rely on manually selecting the perturbation magnitude, a practice that is neither efficient nor optimal for individual users. This study proposes a new reinforcement learning (RL) method to personalize the minimum magnitude of periodic perturbation to hip flexion that ensures successful entrainment. The method entails offline learning and in situ adaptation (OLAP), where offline learning involves training a deep Q-network (DQN), which is subsequently used in situ to guide the adaptive selection of an optimal perturbation magnitude for individuals. This study recruited thirteen healthy participants, with entrainment characteristics data from seven participants used for offline DQN training. The remaining six participants performed in situ adaptation to identify their personalized optimal perturbation parameters. Results demonstrate that the OLAP agent effectively tailored a minimum perturbation magnitude for each of the six participants in the adaptation group, leveraging generalization from the DQN policy. All adaptation group participants achieved a 100% entrainment success rate at their personalized perturbation magnitude during a 3-trial post-evaluation session, highlighting the agent's effectiveness. The efficiency and robustness of our approach underscore its significance in designing future optimal gait entrainment studies for diverse population groups.
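The abstract describes a two-stage pipeline: a deep Q-network is trained offline on entrainment data, then used in situ to adaptively select the minimum perturbation magnitude that still yields entrainment. As a rough illustration of that selection logic only, the sketch below uses simplified tabular Q-learning in place of the paper's DQN, with a hypothetical simulated participant who entrains whenever the magnitude exceeds a personal threshold; the magnitude grid, reward shaping, and threshold are illustrative assumptions, not values from the paper.

```python
import random

# Candidate perturbation magnitudes (hypothetical, arbitrary units).
MAGNITUDES = [0.1, 0.2, 0.3, 0.4, 0.5]

def simulate_entrainment(mag, threshold=0.3):
    """Toy participant model: entrainment succeeds iff magnitude >= threshold."""
    return mag >= threshold

def reward(mag, entrained):
    # Reward successful entrainment, but penalize magnitude so the agent
    # settles on the *minimum* effective perturbation, as in the paper's goal.
    return (1.0 if entrained else -1.0) - 0.5 * mag

def train(steps=5000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over magnitude indices; actions: 0=down, 1=stay, 2=up."""
    rng = random.Random(seed)
    q = [[0.0] * 3 for _ in MAGNITUDES]
    state = 0
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(3)
        else:
            a = max(range(3), key=lambda i: q[state][i])
        nxt = min(max(state + (a - 1), 0), len(MAGNITUDES) - 1)
        r = reward(MAGNITUDES[nxt], simulate_entrainment(MAGNITUDES[nxt]))
        q[state][a] += alpha * (r + gamma * max(q[nxt]) - q[state][a])
        state = nxt
    return q

def personalized_magnitude(q):
    """Greedy rollout: the magnitude the learned policy settles on."""
    state = len(MAGNITUDES) - 1
    for _ in range(20):
        a = max(range(3), key=lambda i: q[state][i])
        state = min(max(state + (a - 1), 0), len(MAGNITUDES) - 1)
    return MAGNITUDES[state]
```

With the toy threshold of 0.3, the greedy policy converges to the smallest magnitude at or above it, mirroring how the OLAP agent is described as tailoring a minimum effective magnitude per participant; the real system replaces both the tabular Q-table and the simulated participant with a DQN and live human gait data.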
Source Journal
IEEE Robotics and Automation Letters
Subject area: Computer Science - Computer Science Applications
CiteScore: 9.60
Self-citation rate: 15.40%
Annual publications: 1428
Journal description: The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.