Hierarchical segmentation of manipulation actions based on object relations and motion characteristics

Mirko Wächter, T. Asfour
{"title":"基于对象关系和运动特征的操作动作分层分割","authors":"Mirko Wächter, T. Asfour","doi":"10.1109/ICAR.2015.7251510","DOIUrl":null,"url":null,"abstract":"Understanding human actions is an indispensable capability of humanoid robots which acquire task knowledge from human demonstration. Segmentation of such continuous demonstrations into meaningful segments reduces the complexity of understanding an observed task. In this paper, we propose a two-level hierarchical action segmentation approach which considers semantics of an action in addition to human motion characteristics. On the first level, a semantic segmentation is performed based on contact relations between human end-effectors, the scene, and between objects in the scene. On the second level, the semantic segments are further sub-divided based on a novel heuristic that incorporates the motion characteristics into the segmentation procedure. As input for the segmentation, we present an observation method for tracking the human as well as the objects and the environment. 6D pose trajectories of the human's hands and all objects are extracted in a precise and robust manner from data of a marker-based tracking system. We evaluated and compared our approach with a manual reference segmentation and well-known segmentation algorithms based on PCA and zero-velocity-crossings using 13 human demonstrations of daily activities.We show that significantly smaller segmentation errors are achieved with our approach while providing the necessary granularity for representing human demonstrations.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":"{\"title\":\"Hierarchical segmentation of manipulation actions based on object relations and motion characteristics\",\"authors\":\"Mirko Wächter, T. Asfour\",\"doi\":\"10.1109/ICAR.2015.7251510\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Understanding human actions is an indispensable capability of humanoid robots which acquire task knowledge from human demonstration. Segmentation of such continuous demonstrations into meaningful segments reduces the complexity of understanding an observed task. In this paper, we propose a two-level hierarchical action segmentation approach which considers semantics of an action in addition to human motion characteristics. On the first level, a semantic segmentation is performed based on contact relations between human end-effectors, the scene, and between objects in the scene. On the second level, the semantic segments are further sub-divided based on a novel heuristic that incorporates the motion characteristics into the segmentation procedure. As input for the segmentation, we present an observation method for tracking the human as well as the objects and the environment. 6D pose trajectories of the human's hands and all objects are extracted in a precise and robust manner from data of a marker-based tracking system. 
We evaluated and compared our approach with a manual reference segmentation and well-known segmentation algorithms based on PCA and zero-velocity-crossings using 13 human demonstrations of daily activities.We show that significantly smaller segmentation errors are achieved with our approach while providing the necessary granularity for representing human demonstrations.\",\"PeriodicalId\":432004,\"journal\":{\"name\":\"2015 International Conference on Advanced Robotics (ICAR)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-07-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"39\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 International Conference on Advanced Robotics (ICAR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAR.2015.7251510\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Advanced Robotics (ICAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAR.2015.7251510","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 39

Abstract

Understanding human actions is an indispensable capability of humanoid robots that acquire task knowledge from human demonstration. Segmenting such continuous demonstrations into meaningful segments reduces the complexity of understanding an observed task. In this paper, we propose a two-level hierarchical action segmentation approach that considers the semantics of an action in addition to human motion characteristics. On the first level, a semantic segmentation is performed based on contact relations between human end-effectors, the scene, and objects in the scene. On the second level, the semantic segments are further sub-divided based on a novel heuristic that incorporates motion characteristics into the segmentation procedure. As input for the segmentation, we present an observation method for tracking the human as well as the objects and the environment. 6D pose trajectories of the human's hands and all objects are extracted in a precise and robust manner from the data of a marker-based tracking system. We evaluated and compared our approach with a manual reference segmentation and with well-known segmentation algorithms based on PCA and zero-velocity crossings, using 13 human demonstrations of daily activities. We show that our approach achieves significantly smaller segmentation errors while providing the necessary granularity for representing human demonstrations.
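The abstract describes the two-level structure but not the implementation. As a minimal Python sketch of the idea, the code below first cuts a demonstration wherever the set of contact relations changes (level 1) and then sub-divides each semantic segment at near-zero hand speed (level 2). All function names and the contact encoding are assumptions made for illustration, and the velocity-minimum rule is a zero-velocity-crossing-style stand-in (closer to the baseline the authors compare against than to their own heuristic, which the abstract does not spell out).

```python
import numpy as np

def semantic_segments(contacts):
    """Level 1: cut the demonstration wherever the set of contact
    relations (hand-object, object-object, object-scene) changes.

    contacts: one set of relation labels per frame, e.g.
              [{"hand-cup"}, {"hand-cup", "cup-table"}, ...].
    Returns (start, end) frame intervals, end exclusive.
    """
    cuts = [0]
    for t in range(1, len(contacts)):
        if contacts[t] != contacts[t - 1]:
            cuts.append(t)
    cuts.append(len(contacts))
    return list(zip(cuts[:-1], cuts[1:]))

def motion_subsegments(hand_pos, start, end, vel_eps=1e-3):
    """Level 2 (stand-in heuristic): sub-divide one semantic segment
    at local minima of hand speed below vel_eps, a simplified proxy
    for the paper's motion-characteristic heuristic.

    hand_pos: (T, 3) array of hand positions taken from the 6D pose
    trajectories of the tracking system.
    """
    speed = np.linalg.norm(np.diff(hand_pos[start:end], axis=0), axis=1)
    cuts = [start]
    for t in range(1, len(speed) - 1):
        if speed[t] < vel_eps and speed[t] <= speed[t - 1] and speed[t] <= speed[t + 1]:
            cuts.append(start + t)
    cuts.append(end)
    return list(zip(cuts[:-1], cuts[1:]))

# Hypothetical usage on one tracked demonstration:
# segments = semantic_segments(contacts)
# fine = [s for a, b in segments
#           for s in motion_subsegments(hand_pos, a, b)]
```

The division of labor mirrors the paper's hierarchy: contact changes yield coarse, semantically meaningful segments, while within-segment motion cues (here, speed minima) recover the finer granularity needed to represent a demonstration.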