Evaluating the Pertinence of Pose Estimation model for Sign Language Translation

K. Amrutha, P. Prabu
{"title":"手势翻译中姿态估计模型的针对性评价","authors":"K. Amrutha, P. Prabu","doi":"10.1142/s1469026823410092","DOIUrl":null,"url":null,"abstract":"Sign Language is the natural language used by a community that is hearing impaired. It is necessary to convert this language to a commonly understandable form as it is used by a comparatively small part of society. The automatic Sign Language interpreters can convert the signs into text or audio by interpreting the hand movements and the corresponding facial expression. These two modalities work in tandem to give complete meaning to each word. In verbal communication, emotions can be conveyed by changing the tone and pitch of the voice, but in sign language, emotions are expressed using nonmanual movements that include body posture and facial muscle movements. Each such subtle moment should be considered as a feature and extracted using different models. This paper proposes three different models that can be used for varying levels of sign language. The first test was carried out using the Convex Hull-based Sign Language Recognition (SLR) finger spelling sign language, next using a Convolution Neural Network-based Sign Language Recognition (CNN-SLR) for fingerspelling sign language, and finally pose-based SLR for word-level sign language. The experiments show that the pose-based SLR model that captures features using landmark or key points has better SLR accuracy than Convex Hull and CNN-based SLR models.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the Pertinence of Pose Estimation model for Sign Language Translation\",\"authors\":\"K. Amrutha, P. Prabu\",\"doi\":\"10.1142/s1469026823410092\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sign Language is the natural language used by a community that is hearing impaired. It is necessary to convert this language to a commonly understandable form as it is used by a comparatively small part of society. The automatic Sign Language interpreters can convert the signs into text or audio by interpreting the hand movements and the corresponding facial expression. These two modalities work in tandem to give complete meaning to each word. In verbal communication, emotions can be conveyed by changing the tone and pitch of the voice, but in sign language, emotions are expressed using nonmanual movements that include body posture and facial muscle movements. Each such subtle moment should be considered as a feature and extracted using different models. This paper proposes three different models that can be used for varying levels of sign language. The first test was carried out using the Convex Hull-based Sign Language Recognition (SLR) finger spelling sign language, next using a Convolution Neural Network-based Sign Language Recognition (CNN-SLR) for fingerspelling sign language, and finally pose-based SLR for word-level sign language. The experiments show that the pose-based SLR model that captures features using landmark or key points has better SLR accuracy than Convex Hull and CNN-based SLR models.\",\"PeriodicalId\":422521,\"journal\":{\"name\":\"Int. J. Comput. Intell. 
Appl.\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Int. J. Comput. Intell. Appl.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1142/s1469026823410092\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Comput. Intell. Appl.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s1469026823410092","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Sign language is the natural language used by the hearing-impaired community. Because it is used by a comparatively small part of society, it needs to be converted into a commonly understandable form. Automatic sign language interpreters can convert signs into text or audio by interpreting the hand movements and the corresponding facial expressions. These two modalities work in tandem to give complete meaning to each word. In verbal communication, emotions can be conveyed by changing the tone and pitch of the voice, but in sign language, emotions are expressed through non-manual movements, including body posture and facial muscle movements. Each such subtle movement should be treated as a feature and extracted using an appropriate model. This paper proposes three different models that can be used for varying levels of sign language. The first test was carried out using a Convex Hull-based Sign Language Recognition (SLR) model for fingerspelling, the next using a Convolutional Neural Network-based Sign Language Recognition (CNN-SLR) model for fingerspelling, and the last using a pose-based SLR model for word-level sign language. The experiments show that the pose-based SLR model, which captures features using landmarks or key points, achieves better recognition accuracy than the Convex Hull- and CNN-based SLR models.
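The abstract states that the best-performing model captures landmark or key-point features but does not name a specific detector, so the sketch below assumes MediaPipe Holistic purely for illustration. It produces one fixed-length keypoint vector per video frame (33 pose landmarks plus 21 landmarks per hand), which could then be fed to a downstream word-level sign classifier; the file name and the classifier are hypothetical, and this is a minimal sketch rather than the authors' implementation.

```python
# Minimal sketch of pose-based keypoint feature extraction for word-level SLR.
# Assumes MediaPipe Holistic as the landmark detector (not stated in the paper).
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic


def landmarks_to_vector(landmark_list, count):
    """Flatten a MediaPipe landmark list into (x, y, z) triples.

    Returns zeros when the body part is not detected, so every frame
    yields a vector of the same length.
    """
    if landmark_list is None:
        return np.zeros(count * 3, dtype=np.float32)
    return np.array(
        [[lm.x, lm.y, lm.z] for lm in landmark_list.landmark],
        dtype=np.float32,
    ).flatten()


def extract_keypoint_features(video_path):
    """Return one feature vector per frame: 33 pose + 2 x 21 hand landmarks."""
    features = []
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(static_image_mode=False,
                              min_detection_confidence=0.5) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads frames as BGR.
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frame_vec = np.concatenate([
                landmarks_to_vector(results.pose_landmarks, 33),
                landmarks_to_vector(results.left_hand_landmarks, 21),
                landmarks_to_vector(results.right_hand_landmarks, 21),
            ])
            features.append(frame_vec)
    cap.release()
    return np.stack(features) if features else np.empty((0, (33 + 42) * 3))


# Hypothetical usage: the per-frame vectors would be passed to a sequence
# classifier (e.g., an LSTM) to predict the word-level sign.
# X = extract_keypoint_features("sign_clip.mp4")
```

Zero-padding missing landmarks keeps the feature dimensionality constant across frames, which is one simple way to handle frames where a hand leaves the camera view; other strategies (interpolation, masking) are equally plausible.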