Sign language localization: Learning to eliminate language dialects

Memona Tariq, A. Iqbal, A. Zahid, Zainab Iqbal, J. Akhtar
{"title":"Sign language localization: Learning to eliminate language dialects","authors":"Memona Tariq, A. Iqbal, A. Zahid, Zainab Iqbal, J. Akhtar","doi":"10.1109/INMIC.2012.6511463","DOIUrl":null,"url":null,"abstract":"Machine translation of sign language into spoken languages is an important yet non-trivial task. The sheer variety of dialects that exist in any sign language makes it only harder to come up with a generalized sign language classification system. Though a lot of work has been done in this area previously but most of the approaches rely on intrusive hardware in the form of wired or colored gloves or are specific language/dialect dependent for accurate sign language interpretation. We propose a cost-effective, non-intrusive webcam based solution in which a person from any part of the world can train our system to make it learn the sign language in their own specific dialect, so that our software can then correctly translate the hand signs into a commonly spoken language, such as English. Image based hand gesture recognition carries sheer importance in this task. The heart of hand gesture recognition systems is the detection and extraction of the sign (hand gesture) from the input image stream. Our work uses functions like skin color based thresholding, contour detection and convexity defect for detection of hands and identification of important points on the hand respectively. The distance of these important contour points from the centroid of the hand becomes our feature vector against which we train our neural network. The system works in two phases. In the training phase the correspondence between users hand gestures against each sign language symbol is learnt using a feed forward neural network with back propagation learning algorithm. Once the training is complete, user is free to use our system for translation or communication with other people. Experimental results based on training and testing the system with numerous users show that the proposed method can work well for dialect-free sign language translation (numerals and alphabets) and gives us average recognition accuracies of around 65% and 55% with the maximum recognition accuracies rising upto 77% and 62% respectively.","PeriodicalId":396084,"journal":{"name":"2012 15th International Multitopic Conference (INMIC)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 15th International Multitopic Conference (INMIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INMIC.2012.6511463","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Machine translation of sign language into spoken languages is an important yet non-trivial task. The sheer variety of dialects that exist in any sign language makes it even harder to come up with a generalized sign language classification system. Although a lot of work has been done in this area previously, most approaches rely on intrusive hardware in the form of wired or colored gloves, or depend on a specific language/dialect for accurate sign language interpretation. We propose a cost-effective, non-intrusive, webcam-based solution in which a person from any part of the world can train our system to learn the sign language in their own specific dialect, so that our software can then correctly translate the hand signs into a commonly spoken language, such as English. Image-based hand gesture recognition is of central importance to this task. The heart of a hand gesture recognition system is the detection and extraction of the sign (hand gesture) from the input image stream. Our work uses skin-color-based thresholding, contour detection, and convexity defects to detect the hand and identify important points on it. The distances of these important contour points from the centroid of the hand form our feature vector, against which we train our neural network. The system works in two phases. In the training phase, the correspondence between the user's hand gestures and each sign language symbol is learned using a feed-forward neural network with the backpropagation learning algorithm. Once training is complete, the user is free to use our system for translation or communication with other people. Experimental results from training and testing the system with numerous users show that the proposed method works well for dialect-free sign language translation (numerals and alphabets), giving average recognition accuracies of around 65% and 55%, with maximum recognition accuracies rising up to 77% and 62% respectively.
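The feature-extraction and training steps outlined above map naturally onto a short script. Below is a minimal sketch, assuming OpenCV, NumPy, and scikit-learn; the HSV skin-color range, the fixed feature-vector length, and the MLPClassifier stand-in for the paper's feed-forward/backpropagation network are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the described pipeline: skin-color thresholding,
# contour detection, convexity defects, centroid-distance features,
# and a small feed-forward classifier. All thresholds and sizes are assumed.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

N_FEATURES = 16  # assumed fixed length of the distance feature vector


def extract_hand_features(frame_bgr):
    """Skin-color thresholding -> largest contour -> convexity defects ->
    distances of the defect points from the hand centroid (feature vector)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-color range in HSV (assumed; would need per-user tuning).
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)  # assume hand is the largest blob

    # Centroid of the hand contour from its image moments.
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Convexity defects pick out the "important points" between the fingers.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return None

    # Distance of each defect's far point from the centroid.
    dists = [np.hypot(hand[f][0][0] - cx, hand[f][0][1] - cy)
             for _, _, f, _ in defects[:, 0]]

    # Pad/truncate to a fixed length and normalize so vectors are
    # comparable across frames and users.
    dists = sorted(dists, reverse=True)[:N_FEATURES]
    dists += [0.0] * (N_FEATURES - len(dists))
    vec = np.asarray(dists, dtype=np.float32)
    return vec / (np.linalg.norm(vec) + 1e-8)


def train_sign_classifier(feature_vectors, labels):
    """Training phase: learn the mapping from feature vectors to sign labels
    with a small feed-forward network trained by backpropagation."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
    clf.fit(feature_vectors, labels)
    return clf
```

In use, each training frame's feature vector would be paired with the sign the user is currently performing; at translation time, extract_hand_features would be run per frame and its output fed to the trained classifier.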