{"title":"Applying attributes to improve human activity recognition","authors":"D. Tahmoush, Claire Bonial","doi":"10.1109/AIPR.2015.7444553","DOIUrl":null,"url":null,"abstract":"Activity and event recognition from video has utilized low-level features over higher-level text-based class attributes and ontologies because they traditionally have been more effective on small datasets. However, by including human knowledge-driven associations between actions and attributes while recognizing the lower-level attributes with their temporal relationships, we can learn a much greater set of activities as well as improve low-level feature-based algorithms by incorporating an expert knowledge ontology. In an event ontology, events can be broken down into actions, and these can be decomposed further into attributes. For example, throwing events can include throwing of stones or baseballs with the object being relocated from a hand through the air to a location of interest. The throwing can be broken down into the many physical attributes that can be used to describe the motion like BodyPartsUsed = Hands, BodyPartArticulation-Arm = OneArmRaisedOverHead, and many others. Building general attributes from video and merging them into an ontology for recognition allows significant reuse for the development of activity and event classifiers. Each activity or event classifier is composed of interacting attributes the same way sentences are composed of interacting letters to create a complete language.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2015.7444553","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Activity and event recognition from video has favored low-level features over higher-level, text-based class attributes and ontologies because low-level features have traditionally been more effective on small datasets. However, by combining human knowledge-driven associations between actions and attributes with recognition of the lower-level attributes and their temporal relationships, we can learn a much larger set of activities and also improve low-level feature-based algorithms by incorporating an expert knowledge ontology. In an event ontology, events can be broken down into actions, and these can be decomposed further into attributes. For example, throwing events include the throwing of stones or baseballs, in which an object is relocated from a hand, through the air, to a location of interest. A throw can be broken down into the many physical attributes that describe the motion, such as BodyPartsUsed = Hands, BodyPartArticulation-Arm = OneArmRaisedOverHead, and many others. Building general attributes from video and merging them into an ontology for recognition allows significant reuse in the development of activity and event classifiers. Each activity or event classifier is composed of interacting attributes, much as sentences are composed of interacting letters to form a complete language.
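The abstract gives no implementation, but the decomposition it describes (events built from attributes with temporal relationships) maps naturally onto a small data structure. The Python sketch below is a hypothetical illustration only, not the authors' system: the Observation and EventDefinition types, the ObjectState attribute, and the ordered-subsequence matching rule are all assumptions; only the BodyPartsUsed and BodyPartArticulation-Arm attribute names come from the abstract.

```python
from dataclasses import dataclass, field

# An attribute is a name = value pair, e.g. BodyPartsUsed = Hands.
@dataclass(frozen=True)
class Attribute:
    name: str
    value: str

# A detected attribute with the time interval over which it was
# observed in the video (type invented for this sketch).
@dataclass
class Observation:
    attribute: Attribute
    start: float  # seconds
    end: float

# An event is defined by the attributes it requires, in temporal order
# (the ordering rule is an assumption, not stated in the abstract).
@dataclass
class EventDefinition:
    name: str
    required: list[Attribute] = field(default_factory=list)

def matches(event: EventDefinition, observations: list[Observation]) -> bool:
    """Return True if the observations contain the event's required
    attributes as an in-order subsequence of their start times."""
    i = 0
    for obs in sorted(observations, key=lambda o: o.start):
        if i < len(event.required) and obs.attribute == event.required[i]:
            i += 1
    return i == len(event.required)

# Throwing event: the first two attributes appear in the abstract;
# ObjectState = Airborne is invented here to make the ordering concrete.
THROW = EventDefinition(
    name="Throw",
    required=[
        Attribute("BodyPartsUsed", "Hands"),
        Attribute("BodyPartArticulation-Arm", "OneArmRaisedOverHead"),
        Attribute("ObjectState", "Airborne"),
    ],
)

if __name__ == "__main__":
    detected = [
        Observation(Attribute("BodyPartsUsed", "Hands"), 0.0, 1.2),
        Observation(Attribute("BodyPartArticulation-Arm",
                              "OneArmRaisedOverHead"), 0.8, 1.5),
        Observation(Attribute("ObjectState", "Airborne"), 1.5, 2.4),
    ]
    print(f"{THROW.name}: {matches(THROW, detected)}")  # Throw: True
```

The point of the sketch is the reuse claim in the abstract: because each EventDefinition is just a list of shared attribute detectors, adding a new event classifier means composing existing attributes rather than training a new low-level model from scratch.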