A Method of Detecting Human Movement Intentions in Real Environments

Yi-Xing Liu, Zhao-Yuan Wan, Ruoli Wang, Elena M Gutierrez-Farewik
{"title":"一种在真实环境中检测人类运动意图的方法。","authors":"Yi-Xing Liu, Zhao-Yuan Wan, Ruoli Wang, Elena M Gutierrez-Farewik","doi":"10.1109/ICORR58425.2023.10304774","DOIUrl":null,"url":null,"abstract":"<p><p>Accurate and timely movement intention detection can facilitate exoskeleton control during transitions between different locomotion modes. Detecting movement intentions in real environments remains a challenge due to unavoidable environmental uncertainties. False movement intention detection may also induce risks of falling and general danger for exoskeleton users. To this end, in this study, we developed a method for detecting human movement intentions in real environments. The proposed method is capable of online self-correcting by implementing a decision fusion layer. Gaze data from an eye tracker and inertial measurement unit (IMU) signals were fused at the feature extraction level and used to predict movement intentions using 2 different methods. Images from the scene camera embedded on the eye tracker were used to identify terrains using a convolutional neural network. The decision fusion was made based on the predicted movement intentions and identified terrains. Four able-bodied participants wearing the eye tracker and 7 IMU sensors took part in the experiments to complete the tasks of level ground walking, ramp ascending, ramp descending, stairs ascending, and stair descending. The recorded experimental data were used to test the feasibility of the proposed method. An overall accuracy of 93.4% was achieved when both feature fusion and decision fusion were used. Fusing gaze data with IMU signals improved the prediction accuracy.</p>","PeriodicalId":73276,"journal":{"name":"IEEE ... International Conference on Rehabilitation Robotics : [proceedings]","volume":"2023 ","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Method of Detecting Human Movement Intentions in Real Environments.\",\"authors\":\"Yi-Xing Liu, Zhao-Yuan Wan, Ruoli Wang, Elena M Gutierrez-Farewik\",\"doi\":\"10.1109/ICORR58425.2023.10304774\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Accurate and timely movement intention detection can facilitate exoskeleton control during transitions between different locomotion modes. Detecting movement intentions in real environments remains a challenge due to unavoidable environmental uncertainties. False movement intention detection may also induce risks of falling and general danger for exoskeleton users. To this end, in this study, we developed a method for detecting human movement intentions in real environments. The proposed method is capable of online self-correcting by implementing a decision fusion layer. Gaze data from an eye tracker and inertial measurement unit (IMU) signals were fused at the feature extraction level and used to predict movement intentions using 2 different methods. Images from the scene camera embedded on the eye tracker were used to identify terrains using a convolutional neural network. The decision fusion was made based on the predicted movement intentions and identified terrains. Four able-bodied participants wearing the eye tracker and 7 IMU sensors took part in the experiments to complete the tasks of level ground walking, ramp ascending, ramp descending, stairs ascending, and stair descending. The recorded experimental data were used to test the feasibility of the proposed method. 
An overall accuracy of 93.4% was achieved when both feature fusion and decision fusion were used. Fusing gaze data with IMU signals improved the prediction accuracy.</p>\",\"PeriodicalId\":73276,\"journal\":{\"name\":\"IEEE ... International Conference on Rehabilitation Robotics : [proceedings]\",\"volume\":\"2023 \",\"pages\":\"1-6\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE ... International Conference on Rehabilitation Robotics : [proceedings]\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICORR58425.2023.10304774\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE ... International Conference on Rehabilitation Robotics : [proceedings]","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICORR58425.2023.10304774","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract


Accurate and timely movement intention detection can facilitate exoskeleton control during transitions between different locomotion modes. Detecting movement intentions in real environments remains a challenge due to unavoidable environmental uncertainties. False movement intention detection can also put exoskeleton users at risk of falls and other danger. To this end, in this study, we developed a method for detecting human movement intentions in real environments. The proposed method is capable of online self-correction through a decision fusion layer. Gaze data from an eye tracker and inertial measurement unit (IMU) signals were fused at the feature extraction level and used to predict movement intentions with two different methods. Images from the scene camera embedded in the eye tracker were used to identify terrain with a convolutional neural network. Decision fusion was performed based on the predicted movement intentions and the identified terrain. Four able-bodied participants wearing the eye tracker and 7 IMU sensors took part in experiments comprising level-ground walking, ramp ascent, ramp descent, stair ascent, and stair descent. The recorded experimental data were used to test the feasibility of the proposed method. An overall accuracy of 93.4% was achieved when both feature fusion and decision fusion were used. Fusing gaze data with IMU signals improved the prediction accuracy.
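
The abstract outlines a two-stage fusion pipeline: gaze and IMU features are fused and fed to intention predictors, a CNN identifies the terrain from the scene camera, and a decision layer combines the two. As a rough illustration of that idea only, the Python sketch below uses invented names, simple window statistics, and a plain compatibility-weighting rule; none of it reflects the authors' actual features, models, or fusion rule.

```python
# Minimal sketch of the fusion idea described in the abstract.
# All names, shapes, and the fusion rule are illustrative assumptions,
# not the authors' implementation.
import numpy as np

LOCOMOTION_MODES = [
    "level_walk", "ramp_ascent", "ramp_descent",
    "stair_ascent", "stair_descent",
]

def extract_features(gaze_window, imu_window):
    """Feature-level fusion: concatenate per-channel mean and std of both streams."""
    def stats(x):
        return np.concatenate([x.mean(axis=0), x.std(axis=0)])
    return np.concatenate([stats(gaze_window), stats(imu_window)])

def fuse_decisions(intent_probs_a, intent_probs_b, terrain_probs):
    """Decision fusion: average two intention predictors, then re-weight by
    the terrain recognized from the scene camera, so a predicted mode that
    contradicts the terrain is suppressed (one way to self-correct online)."""
    intent = 0.5 * (intent_probs_a + intent_probs_b)
    fused = intent * terrain_probs           # element-wise compatibility weighting
    fused = fused / fused.sum()              # renormalize to probabilities
    return LOCOMOTION_MODES[int(np.argmax(fused))]

# Toy usage: 50-sample windows of 2-D gaze and 7 IMUs x 6 channels = 42 signals.
rng = np.random.default_rng(0)
features = extract_features(rng.normal(size=(50, 2)), rng.normal(size=(50, 42)))
mode = fuse_decisions(
    np.array([0.6, 0.1, 0.1, 0.1, 0.1]),    # intention predictor A (hypothetical)
    np.array([0.5, 0.2, 0.1, 0.1, 0.1]),    # intention predictor B (hypothetical)
    np.array([0.7, 0.1, 0.1, 0.05, 0.05]),  # terrain CNN output (hypothetical)
)
print(features.shape, mode)                  # -> (88,) level_walk
```

Weighting the averaged intention probabilities by terrain compatibility is one plausible way a decision layer could veto a prediction that contradicts the scene, which is consistent with the abstract's claim that the decision fusion layer enables online self-correction.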
