Hand Sign Recognition using Infrared Imagery Provided by Leap Motion Controller and Computer Vision

Tathagat Banerjee, K. Srikar, S. Reddy, Krishna Sai Biradar, Rithika Reddy Koripally, Gummadi. Varshith
DOI: 10.1109/ICIPTM52218.2021.9388334
Published in: 2021 International Conference on Innovative Practices in Technology and Management (ICIPTM)
Publication date: 2021-02-17
Citations: 1

Abstract

Converting sign language into re-engineered audio signals for people with speech impairment is a long-standing goal of computer science. However, architectural robustness and the extraction of features from very small regions of change have posed decades-long obstacles to realizing this idea. This paper proposes a convolutional neural network, based on a deep belief model, for hand-sign recognition on imagery collected by a Leap Motion controller. The presented database comprises 10 different hand gestures performed by 10 subjects (5 men and 5 women), captured as a set of near-infrared images acquired by the Leap Motion sensor. The paper aims for high accuracy on the corresponding training set in order to build a robust model, taking a first step toward image understanding of human signs and aiding specially-abled people. The algorithm was implemented and tested on 2,000 images per class, achieving an accuracy of 99.4% and a precision of 99.68%. The implications of the study are to deepen the understanding of infrared imagery for feature detection over small localization areas, and to help revive the idea of human audio re-engineering using the same.
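The abstract describes a CNN trained on 10 classes of near-infrared Leap Motion frames. As a minimal sketch of that kind of classifier, the following PyTorch model takes single-channel (infrared) images and emits one logit per gesture class. The layer widths, the 64x64 input resolution, and the overall architecture are illustrative assumptions; the paper does not specify its exact network.

```python
# Sketch of a 10-class CNN for single-channel near-infrared gesture
# frames. Layer sizes and the 64x64 input resolution are assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 IR channel in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # one logit per gesture
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = GestureCNN()
batch = torch.randn(4, 1, 64, 64)  # 4 placeholder near-infrared frames
logits = model(batch)              # shape: (4, 10)
```

In practice such a model would be trained with cross-entropy loss over the 2,000 images per class mentioned in the abstract; accuracy and precision would then be computed on a held-out split.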