Integrating kinect with openCV to interpret interaction via gestures

Igor G. Pimenta, Livia N. Sarmento, A. H. Kronbauer, B. B. Araujo
{"title":"Integrating kinect with openCV to interpret interaction via gestures","authors":"Igor G. Pimenta, Livia N. Sarmento, A. H. Kronbauer, B. B. Araujo","doi":"10.1145/3148456.3148469","DOIUrl":null,"url":null,"abstract":"We can see progress in the area of Ambient Intelligence (AmIs) with the improvement of embedded systems and technologies using wireless networks. Moreover, the development of studies on interaction between human beings and electronic devices have become increasingly more natural. This new scenario favors the development of ubiquitous computing, in which the relevance of body language is gradually increasing. In this paper, we propose a model of interaction via gestures and test its efficiency through the creation of an infrastructure. In order to assess its usability we developed an experiment with potential users and identified good results. We found that by combining the images identified by Kinect and the interpretation of gestures from OpenCV, improved the gesture recognition greatly.","PeriodicalId":423409,"journal":{"name":"Proceedings of the 14th Brazilian Symposium on Human Factors in Computing Systems","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 14th Brazilian Symposium on Human Factors in Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3148456.3148469","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Progress in the area of Ambient Intelligence (AmI) can be seen in the improvement of embedded systems and of technologies that use wireless networks. Moreover, studies on the interaction between human beings and electronic devices have advanced, and this interaction has become increasingly natural. This new scenario favors the development of ubiquitous computing, in which the relevance of body language is gradually increasing. In this paper, we propose a model of interaction via gestures and test its efficiency through the creation of an infrastructure. In order to assess its usability, we carried out an experiment with potential users and obtained good results. We found that combining the images captured by Kinect with the gesture interpretation performed by OpenCV greatly improved gesture recognition.
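The abstract describes combining Kinect imagery with OpenCV-based gesture interpretation but gives no implementation detail. The sketch below is not the authors' pipeline; it only illustrates, under stated assumptions, one common way such a combination is wired together: a Kinect-style depth frame is segmented by a depth band, and OpenCV contour/convexity analysis turns the resulting silhouette into a crude gesture feature (a finger count). The helper names (segment_hand, count_fingers), the depth thresholds, and the synthetic frame are all illustrative assumptions; a real setup would pull frames from the Kinect SDK or libfreenect.

```python
# Illustrative sketch only (OpenCV 4.x API); not the implementation from the paper.
import numpy as np
import cv2


def segment_hand(depth_mm, near=500, far=900):
    """Keep only pixels inside an assumed 'hand' depth band (millimetres)."""
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    # Remove small speckle noise typical of depth sensors.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask


def count_fingers(mask):
    """Rough finger count from convexity defects of the largest contour."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 4:
        return 0
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep defects roughly correspond to gaps between extended fingers;
    # the defect depth is stored as fixed-point (pixel distance * 256).
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
    return min(deep + 1, 5) if deep > 0 else 0


if __name__ == "__main__":
    # Stand-in for a Kinect depth frame: a synthetic 16-bit depth image with
    # a blob at ~70 cm playing the role of a hand in front of the sensor.
    depth = np.full((480, 640), 2000, dtype=np.uint16)
    cv2.circle(depth, (320, 240), 60, 700, -1)
    mask = segment_hand(depth)
    print("estimated fingers:", count_fingers(mask))
```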