{"title":"基于对象关系和运动特征的操作动作分层分割","authors":"Mirko Wächter, T. Asfour","doi":"10.1109/ICAR.2015.7251510","DOIUrl":null,"url":null,"abstract":"Understanding human actions is an indispensable capability of humanoid robots which acquire task knowledge from human demonstration. Segmentation of such continuous demonstrations into meaningful segments reduces the complexity of understanding an observed task. In this paper, we propose a two-level hierarchical action segmentation approach which considers semantics of an action in addition to human motion characteristics. On the first level, a semantic segmentation is performed based on contact relations between human end-effectors, the scene, and between objects in the scene. On the second level, the semantic segments are further sub-divided based on a novel heuristic that incorporates the motion characteristics into the segmentation procedure. As input for the segmentation, we present an observation method for tracking the human as well as the objects and the environment. 6D pose trajectories of the human's hands and all objects are extracted in a precise and robust manner from data of a marker-based tracking system. We evaluated and compared our approach with a manual reference segmentation and well-known segmentation algorithms based on PCA and zero-velocity-crossings using 13 human demonstrations of daily activities.We show that significantly smaller segmentation errors are achieved with our approach while providing the necessary granularity for representing human demonstrations.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":"{\"title\":\"Hierarchical segmentation of manipulation actions based on object relations and motion characteristics\",\"authors\":\"Mirko Wächter, T. Asfour\",\"doi\":\"10.1109/ICAR.2015.7251510\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Understanding human actions is an indispensable capability of humanoid robots which acquire task knowledge from human demonstration. Segmentation of such continuous demonstrations into meaningful segments reduces the complexity of understanding an observed task. In this paper, we propose a two-level hierarchical action segmentation approach which considers semantics of an action in addition to human motion characteristics. On the first level, a semantic segmentation is performed based on contact relations between human end-effectors, the scene, and between objects in the scene. On the second level, the semantic segments are further sub-divided based on a novel heuristic that incorporates the motion characteristics into the segmentation procedure. As input for the segmentation, we present an observation method for tracking the human as well as the objects and the environment. 6D pose trajectories of the human's hands and all objects are extracted in a precise and robust manner from data of a marker-based tracking system. 
We evaluated and compared our approach with a manual reference segmentation and well-known segmentation algorithms based on PCA and zero-velocity-crossings using 13 human demonstrations of daily activities.We show that significantly smaller segmentation errors are achieved with our approach while providing the necessary granularity for representing human demonstrations.\",\"PeriodicalId\":432004,\"journal\":{\"name\":\"2015 International Conference on Advanced Robotics (ICAR)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-07-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"39\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 International Conference on Advanced Robotics (ICAR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAR.2015.7251510\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Advanced Robotics (ICAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAR.2015.7251510","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Hierarchical segmentation of manipulation actions based on object relations and motion characteristics
Understanding human actions is an indispensable capability of humanoid robots that acquire task knowledge from human demonstration. Segmenting such continuous demonstrations into meaningful segments reduces the complexity of understanding an observed task. In this paper, we propose a two-level hierarchical action segmentation approach that considers the semantics of an action in addition to human motion characteristics. On the first level, a semantic segmentation is performed based on contact relations between the human's end-effectors and the scene, and between objects in the scene. On the second level, the semantic segments are further sub-divided based on a novel heuristic that incorporates motion characteristics into the segmentation procedure. As input for the segmentation, we present an observation method for tracking the human as well as the objects and the environment: 6D pose trajectories of the human's hands and of all objects are extracted in a precise and robust manner from the data of a marker-based tracking system. We evaluated our approach against a manual reference segmentation and compared it with well-known segmentation algorithms based on PCA and zero-velocity crossings, using 13 human demonstrations of daily activities. We show that our approach achieves significantly smaller segmentation errors while providing the granularity necessary for representing human demonstrations.
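To make the two-level idea concrete, the following is a minimal Python sketch, not code from the paper: level 1 cuts the demonstration wherever the set of active contact relations changes, and level 2 sub-divides each semantic segment at low-velocity points. Note that the paper's second level uses a novel motion-characteristic heuristic; the zero-velocity-crossing rule below is the baseline the authors compare against, used here only as a stand-in. All function names, data layouts, and the threshold value are illustrative assumptions.

```python
import numpy as np

def contact_segments(contact_states):
    """Level 1: split a demonstration wherever the set of active
    contact relations (hand-object, object-object, object-scene)
    changes. `contact_states` is a list with one frozenset per frame,
    e.g. frozenset({("right_hand", "cup"), ("cup", "table")}).
    Returns (start, end) frame-index pairs, end exclusive."""
    segments = []
    start = 0
    for t in range(1, len(contact_states)):
        if contact_states[t] != contact_states[t - 1]:
            segments.append((start, t))
            start = t
    segments.append((start, len(contact_states)))
    return segments

def zero_velocity_subsegments(positions, seg, threshold=0.01):
    """Level 2 (stand-in heuristic): sub-divide one semantic segment
    at local speed minima below `threshold` (distance units per frame),
    in the spirit of zero-velocity-crossing segmentation.
    `positions` is an (N, 3) array of hand positions."""
    start, end = seg
    # Per-frame speed of the hand within this segment.
    speed = np.linalg.norm(np.diff(positions[start:end], axis=0), axis=1)
    cuts = [start]
    for i in range(1, len(speed) - 1):
        # Cut at slow local minima of the speed profile.
        if speed[i] < threshold and speed[i] <= speed[i - 1] and speed[i] <= speed[i + 1]:
            cuts.append(start + i + 1)
    cuts.append(end)
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]
```

A hypothetical composition of the two levels, given per-frame contact sets `contacts` and a hand trajectory `hand_traj`, would be `[zero_velocity_subsegments(hand_traj, s) for s in contact_segments(contacts)]`, yielding fine-grained motion segments nested inside each semantic segment.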