Autonomous segmentation of human action for behaviour analysis
J. E. Hunter, D. Wilkes, D. Levin, C. Heaton, M. Saylor
2008 7th IEEE International Conference on Development and Learning, published 2008-10-10
DOI: 10.1109/DEVLRN.2008.4640838 (https://doi.org/10.1109/DEVLRN.2008.4640838)
Citations: 4
Abstract
To correctly understand human actions, it is necessary to segment a continuous series of movements into units that can be associated with meaningful goals and subgoals. Recent research in cognitive science and machine vision has explored the perceptual and conceptual factors that (a) determine the segment boundaries that human observers place in a range of actions, and (b) allow successful discrimination among different action types. In this project we investigated the degree to which specific movements effectively predict key sub-events in a broad range of actions in which a human model interacts with objects. In addition, we aimed to create an accessible tool for tracking human actions, suitable for a wide range of machine vision and cognitive science applications. Results from our analysis suggest that a set of basic movement cues can successfully predict key sub-events, such as hand-to-object contact, across a wide range of specific tasks, and we specify parameters under which this prediction might be maximized.
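To make the abstract's central idea concrete, the sketch below shows one simple way that basic movement cues could flag a sub-event like hand-to-object contact: look for frames where the tracked hand is close to the object and its speed reaches a local minimum (reaching movements typically decelerate into contact). This is a minimal illustration under assumed inputs, not the paper's actual method; the function name, array shapes, and threshold values are all hypothetical.

```python
import numpy as np

def detect_contact_events(hand_pos, obj_pos, dist_thresh=0.05, fps=30):
    """Heuristic proxy for hand-to-object contact: flag frames where
    the hand is within dist_thresh (metres) of the object and its
    speed hits a local minimum. hand_pos is a (T, 2) array of tracked
    positions; obj_pos is (T, 2) or a single (2,) position.
    Thresholds are illustrative, not taken from the paper."""
    # Per-frame hand speed (m/s) from finite differences; length T-1.
    speed = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) * fps
    # Hand-object distance per frame (broadcasts if obj_pos is (2,)).
    dist = np.linalg.norm(hand_pos - obj_pos, axis=1)
    events = []
    for t in range(1, len(speed)):
        near = dist[t] < dist_thresh
        # Local speed minimum: decelerating into frame t, then
        # accelerating (or the trajectory ends).
        slowing = speed[t - 1] > speed[t] and (
            t + 1 >= len(speed) or speed[t] < speed[t + 1]
        )
        if near and slowing:
            events.append(t)
    return events
```

On a trajectory where the hand approaches, grasps, and lifts an object, such a detector would typically return one frame index near the moment of grasp; the paper's contribution is characterizing which cues and parameter settings make this kind of prediction reliable across many different tasks.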