Igor G. Pimenta, Livia N. Sarmento, A. H. Kronbauer, B. B. Araujo
{"title":"Integrating kinect with openCV to interpret interaction via gestures","authors":"Igor G. Pimenta, Livia N. Sarmento, A. H. Kronbauer, B. B. Araujo","doi":"10.1145/3148456.3148469","DOIUrl":null,"url":null,"abstract":"We can see progress in the area of Ambient Intelligence (AmIs) with the improvement of embedded systems and technologies using wireless networks. Moreover, the development of studies on interaction between human beings and electronic devices have become increasingly more natural. This new scenario favors the development of ubiquitous computing, in which the relevance of body language is gradually increasing. In this paper, we propose a model of interaction via gestures and test its efficiency through the creation of an infrastructure. In order to assess its usability we developed an experiment with potential users and identified good results. We found that by combining the images identified by Kinect and the interpretation of gestures from OpenCV, improved the gesture recognition greatly.","PeriodicalId":423409,"journal":{"name":"Proceedings of the 14th Brazilian Symposium on Human Factors in Computing Systems","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 14th Brazilian Symposium on Human Factors in Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3148456.3148469","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Progress in the area of Ambient Intelligence (AmI) can be seen in the improvement of embedded systems and of technologies that use wireless networks. Moreover, interaction between human beings and electronic devices has become increasingly natural. This scenario favors the development of ubiquitous computing, in which the relevance of body language is gradually increasing. In this paper, we propose a model of interaction via gestures and test its efficiency through the creation of an infrastructure. To assess its usability, we conducted an experiment with potential users and obtained good results. We found that combining the images captured by Kinect with the gesture interpretation performed by OpenCV greatly improved gesture recognition.