Title: An ontology for reasoning on body-based gestures
Authors: Mehdi Ousmer, J. Vanderdonckt, S. Buraga
DOI: 10.1145/3319499.3328238
Published in: Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 2019-06-18
Citations: 9
Abstract
Body-based gestures, such as those acquired by a Kinect sensor, today benefit from efficient tools for their recognition and development, but less so for automated reasoning. To facilitate this activity, an ontology for structuring body-based gestures, organized around the user, the body and its parts, gestures, and the environment, is designed and encoded in the Web Ontology Language (OWL) as modelling triples (subject, predicate, object). As a proof-of-concept and to feed this ontology, a gesture elicitation study collected 24 participants × 19 referents for IoT tasks = 456 elicited body-based gestures, which were classified and expressed according to the ontology.
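To make the (subject, predicate, object) modelling concrete, the sketch below represents a few gesture facts as plain triples and queries them. This is a minimal illustration only: the class and property names (e.g. `performedBy`, `elicitedFor`) are hypothetical assumptions, not the vocabulary of the authors' actual OWL ontology.

```python
# Minimal sketch of (subject, predicate, object) triples for a
# body-based gesture ontology. All names below are illustrative
# assumptions, not the paper's actual OWL classes or properties.

triples = [
    ("Gesture:WaveRight", "performedBy", "BodyPart:RightHand"),
    ("Gesture:WaveRight", "elicitedFor", "Referent:TurnLightOn"),
    ("BodyPart:RightHand", "partOf", "Body:UpperLimb"),
    ("Gesture:WaveRight", "executedIn", "Environment:LivingRoom"),
]

def objects_of(subject, predicate, store):
    """Return all objects matching the pattern (subject, predicate, ?)."""
    return [o for s, p, o in store if s == subject and p == predicate]

# Which body part performs the wave gesture?
print(objects_of("Gesture:WaveRight", "performedBy", triples))
# prints: ['BodyPart:RightHand']
```

In a real encoding these triples would live in an RDF serialization (e.g. Turtle) and be queried with SPARQL rather than list comprehensions; the tuple form above only shows the underlying subject-predicate-object structure.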