Sign language identification and recognition: A comparative study

IF 1.1 Q3 COMPUTER SCIENCE, THEORY & METHODS
Ahmed A. Sultan, Walied Makram, Mohammed Kayed, Abdelmaged Amin Ali
{"title":"手语识别与识别的比较研究","authors":"Ahmed A. Sultan, Walied Makram, Mohammed Kayed, Abdelmaged Amin Ali","doi":"10.1515/comp-2022-0240","DOIUrl":null,"url":null,"abstract":"Abstract Sign Language (SL) is the main language for handicapped and disabled people. Each country has its own SL that is different from other countries. Each sign in a language is represented with variant hand gestures, body movements, and facial expressions. Researchers in this field aim to remove any obstacles that prevent the communication with deaf people by replacing all device-based techniques with vision-based techniques using Artificial Intelligence (AI) and Deep Learning. This article highlights two main SL processing tasks: Sign Language Recognition (SLR) and Sign Language Identification (SLID). The latter task is targeted to identify the signer language, while the former is aimed to translate the signer conversation into tokens (signs). The article addresses the most common datasets used in the literature for the two tasks (static and dynamic datasets that are collected from different corpora) with different contents including numerical, alphabets, words, and sentences from different SLs. It also discusses the devices required to build these datasets, as well as the different preprocessing steps applied before training and testing. The article compares the different approaches and techniques applied on these datasets. It discusses both the vision-based and the data-gloves-based approaches, aiming to analyze and focus on main methods used in vision-based approaches such as hybrid methods and deep learning algorithms. Furthermore, the article presents a graphical depiction and a tabular representation of various SLR approaches.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"191 - 210"},"PeriodicalIF":1.1000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Sign language identification and recognition: A comparative study\",\"authors\":\"Ahmed A. Sultan, Walied Makram, Mohammed Kayed, Abdelmaged Amin Ali\",\"doi\":\"10.1515/comp-2022-0240\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Sign Language (SL) is the main language for handicapped and disabled people. Each country has its own SL that is different from other countries. Each sign in a language is represented with variant hand gestures, body movements, and facial expressions. Researchers in this field aim to remove any obstacles that prevent the communication with deaf people by replacing all device-based techniques with vision-based techniques using Artificial Intelligence (AI) and Deep Learning. This article highlights two main SL processing tasks: Sign Language Recognition (SLR) and Sign Language Identification (SLID). The latter task is targeted to identify the signer language, while the former is aimed to translate the signer conversation into tokens (signs). The article addresses the most common datasets used in the literature for the two tasks (static and dynamic datasets that are collected from different corpora) with different contents including numerical, alphabets, words, and sentences from different SLs. It also discusses the devices required to build these datasets, as well as the different preprocessing steps applied before training and testing. The article compares the different approaches and techniques applied on these datasets. 
It discusses both the vision-based and the data-gloves-based approaches, aiming to analyze and focus on main methods used in vision-based approaches such as hybrid methods and deep learning algorithms. Furthermore, the article presents a graphical depiction and a tabular representation of various SLR approaches.\",\"PeriodicalId\":43014,\"journal\":{\"name\":\"Open Computer Science\",\"volume\":\"12 1\",\"pages\":\"191 - 210\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Open Computer Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1515/comp-2022-0240\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Open Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/comp-2022-0240","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 5

Abstract

Sign Language (SL) is the main language of people with hearing and speech disabilities. Each country has its own SL that differs from those of other countries, and each sign in a language is expressed through particular hand gestures, body movements, and facial expressions. Researchers in this field aim to remove the obstacles that hinder communication with deaf people by replacing device-based techniques with vision-based techniques built on Artificial Intelligence (AI) and deep learning. This article highlights two main SL processing tasks: Sign Language Recognition (SLR) and Sign Language Identification (SLID). The latter identifies which sign language a signer is using, while the former translates the signer's conversation into tokens (signs). The article surveys the datasets most commonly used in the literature for the two tasks (static and dynamic datasets collected from different corpora) with varied contents, including numerals, alphabets, words, and sentences from different SLs. It also discusses the devices required to build these datasets and the preprocessing steps applied before training and testing. The article compares the approaches and techniques applied to these datasets, covering both vision-based and data-glove-based approaches, with a focus on the main methods used in vision-based approaches, such as hybrid methods and deep learning algorithms. Furthermore, the article presents a graphical depiction and a tabular representation of the various SLR approaches.
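As an illustration of the vision-based, deep-learning direction the survey emphasizes, the following is a minimal sketch (not taken from the article) of how static-sign recognition is commonly cast as image classification. It assumes PyTorch, preprocessed 64x64 RGB hand crops, and a 26-class fingerspelling alphabet; the architecture and all sizes are illustrative assumptions, not the surveyed methods themselves.

# Minimal sketch of a vision-based static-sign classifier.
# Assumes cropped hand-gesture images resized to 64x64 RGB; the class count,
# layer sizes, and hyperparameters are illustrative, not from the article.
import torch
import torch.nn as nn


class StaticSignCNN(nn.Module):
    """Small CNN that maps a 64x64 RGB gesture image to one of `num_classes` signs."""

    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # 64x64 -> 64x64
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # per-sign logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = StaticSignCNN(num_classes=26)        # e.g., a fingerspelling alphabet
    dummy_batch = torch.randn(8, 3, 64, 64)      # stand-in for preprocessed frames
    logits = model(dummy_batch)
    print(logits.shape)                          # torch.Size([8, 26])

A dynamic-sign (video) pipeline would typically extend such a frame-level model with a temporal stage, for example a recurrent or 3D-convolutional component, which is the kind of hybrid design the article groups under vision-based approaches.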
Source journal
Open Computer Science (COMPUTER SCIENCE, THEORY & METHODS)
CiteScore: 4.00
Self-citation rate: 0.00%
Articles published: 24
Review time: 25 weeks