Automatic action recognition for assistive robots to support MCI patients at home

G. Stavropoulos, Dimitrios Giakoumis, K. Moustakas, D. Tzovaras
DOI: 10.1145/3056540.3076185
Venue: Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments
Publication date: 2017-06-21
Citations: 16

Abstract

This paper presents a novel approach for automatic human action recognition, focusing on the user-behaviour monitoring needs of assistive robots that aim to support Mild Cognitive Impairment (MCI) patients at home. Our action recognition method utilizes human skeleton joint information, extracted from a low-cost depth sensor mounted on a service robot. Herein, we extend the state-of-the-art EigenJoints descriptor to improve recognition robustness for a series of actions involved in common daily activities. Specifically, we introduce novel features that take into account action specificities such as the joints' travelled distance and their evolution trend in frames subsequent to the reference one. In addition, we use information related to the objects the user manipulates, taking into account that several actions may be similar yet performed with different objects, as well as the fact that real, practical applications involve continuous input video streams rather than pre-segmented action sequences. Through experimental evaluation on the MSR Action3D dataset, our approach has been found to outperform the state of the art in action recognition performance. Evaluation has also been performed on a custom dataset, providing further promising results for future practical applications of our overall action recognition framework.
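To make the descriptor concrete: the baseline EigenJoints representation (Yang & Tian, 2012) that this paper extends concatenates, per frame, the pairwise joint differences within the current frame, the differences to the previous frame, and the differences to a reference frame, typically followed by PCA. The sketch below illustrates that feature layout in NumPy, plus a per-joint travelled-distance term of the kind the abstract hints at; the function name, array shapes, and the exact form of the distance feature are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def eigenjoints_features(skeleton, ref=0):
    """Illustrative EigenJoints-style descriptor.

    skeleton: (T, J, 3) array of J 3D joint positions over T frames
              (shapes and the travelled-distance extension are assumptions).
    Returns a (T, D) feature matrix; PCA, as used in the original
    EigenJoints pipeline, is omitted here for brevity.
    """
    T, J, _ = skeleton.shape
    feats = []
    for t in range(T):
        cur = skeleton[t]
        # f_cc: pairwise joint differences within the current frame
        fcc = (cur[:, None, :] - cur[None, :, :]).reshape(-1)
        # f_cp: differences between current and previous frame (zero at t=0)
        prev = skeleton[t - 1] if t > 0 else cur
        fcp = (cur - prev).reshape(-1)
        # f_ci: differences between current and the reference frame
        fci = (cur - skeleton[ref]).reshape(-1)
        # assumed extension: cumulative distance each joint has travelled
        if t > 0:
            steps = np.diff(skeleton[: t + 1], axis=0)       # (t, J, 3)
            travelled = np.linalg.norm(steps, axis=2).sum(axis=0)
        else:
            travelled = np.zeros(J)
        feats.append(np.concatenate([fcc, fcp, fci, travelled]))
    return np.asarray(feats)
```

For J = 20 joints (as in MSR Action3D skeletons), each frame yields 3J² + 3J + 3J + J = 1340 values before dimensionality reduction; a classifier such as naive-Bayes nearest neighbour, as in the original EigenJoints work, would then operate on the PCA-compressed features.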