Continuous Sign Language Interpretation to Text Using Deep Learning Models

Afridi Ibn Rahman, Zebel-E.-Noor Akhand, Tasin Al Nahian Khan, Anirudh Sarda, Subhi Bhuiyan, Mma Rakib, Zubayer Ahmed Fahim, Indronil Kundu
{"title":"Continuous Sign Language Interpretation to Text Using Deep Learning Models","authors":"Afridi Ibn Rahman, Zebel-E.-Noor Akhand, Tasin Al Nahian Khan, Anirudh Sarda, Subhi Bhuiyan, Mma Rakib, Zubayer Ahmed Fahim, Indronil Kundu","doi":"10.1109/ICCIT57492.2022.10054721","DOIUrl":null,"url":null,"abstract":"The COVID-19 pandemic has obligated people to adopt the virtual lifestyle. Currently, the use of videoconferencing to conduct business meetings is prevalent owing to the numerous benefits it presents. However, a large number of people with speech impediment find themselves handicapped to the new normal as they cannot communicate their ideas effectively, especially in fast paced meetings. Therefore, this paper aims to introduce an enriched dataset using an action recognition method with the most common phrases translated into American Sign Language (ASL) that are routinely used in professional meetings. It further proposes a sign language detecting and classifying model employing deep learning architectures, namely, CNN and LSTM. The performances of these models are analysed by employing different performance metrics like accuracy, recall, F1- Score and Precision. CNN and LSTM models yield an accuracy of 93.75% and 96.54% respectively, after being trained with the dataset introduced in this study. Therefore, the incorporation of the LSTM model into different cloud services, virtual private networks and softwares will allow people with speech impairment to use sign language, which will automatically be translated into captions using moving camera circumstances in real time. This will in turn equip other people with the tool to understand and grasp the message that is being conveyed and easily discuss and effectuate the ideas.","PeriodicalId":255498,"journal":{"name":"2022 25th International Conference on Computer and Information Technology (ICCIT)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 25th International Conference on Computer and Information Technology (ICCIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCIT57492.2022.10054721","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The COVID-19 pandemic has obligated people to adopt a virtual lifestyle. Currently, the use of videoconferencing to conduct business meetings is prevalent owing to the numerous benefits it presents. However, a large number of people with speech impediments find themselves disadvantaged in the new normal, as they cannot communicate their ideas effectively, especially in fast-paced meetings. Therefore, this paper introduces an enriched dataset, built with an action recognition method, of the most common phrases routinely used in professional meetings translated into American Sign Language (ASL). It further proposes a sign language detection and classification model employing deep learning architectures, namely CNN and LSTM. The performance of these models is analysed using different metrics: accuracy, recall, F1-score and precision. The CNN and LSTM models yield accuracies of 93.75% and 96.54% respectively after being trained on the dataset introduced in this study. Incorporating the LSTM model into cloud services, virtual private networks and other software will allow people with speech impairments to use sign language that is automatically translated into captions in real time, even under moving-camera conditions. This in turn equips other participants to understand and grasp the message being conveyed and to readily discuss and act on the ideas.
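The abstract does not specify the exact network configuration, so the following is only a minimal sketch of the kind of LSTM-based phrase classifier and evaluation described above: an LSTM stack over fixed-length sequences of per-frame feature vectors (one sequence per signed phrase clip), scored with the metrics the paper names (accuracy, precision, recall, F1-score). The sequence length, feature dimension, number of phrase classes, and layer sizes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, NOT the authors' exact architecture: an LSTM sequence
# classifier for signed-phrase clips. All shapes and sizes below are assumed.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

SEQ_LEN = 30       # frames per clip (assumed)
FEAT_DIM = 258     # per-frame feature vector length, e.g. pose/hand keypoints (assumed)
NUM_CLASSES = 10   # number of ASL phrases in the dataset (assumed)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, FEAT_DIM)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

# Dummy data with the assumed shapes, just to show the training call.
X = np.random.rand(8, SEQ_LEN, FEAT_DIM).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, NUM_CLASSES, 8), NUM_CLASSES)
model.fit(X, y, epochs=1, batch_size=4, verbose=0)

# Evaluation with the metrics named in the abstract.
y_true = np.argmax(y, axis=1)
y_pred = np.argmax(model.predict(X, verbose=0), axis=1)
acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```

In practice the per-frame features would come from a hand/pose keypoint extractor or a CNN backbone applied to each video frame; the CNN variant reported in the paper would replace the recurrent stack with convolutional layers over the same inputs.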