Establishing a multimodal dataset for Arabic Sign Language (ArSL) production

IF 5.2 · JCR Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Samah Abbas , Dimah Alahmadi , Hassanin Al-Barhamtoshy
DOI: 10.1016/j.jksuci.2024.102165
Journal: Journal of King Saud University - Computer and Information Sciences
Published: 2024-08-30 (Journal Article)
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1319157824002544/pdfft?md5=301cc3d87bf22d8e207fb35edd191aea&pid=1-s2.0-S1319157824002544-main.pdf
Citations: 0

Abstract


This paper addresses the potential of Arabic Sign Language (ArSL) recognition systems to facilitate direct communication and enhance social engagement between deaf and non-deaf individuals. Specifically, we focus on the domain of religion to address the lack of accessible religious content for the deaf community. We propose a multimodal architecture framework and develop a novel dataset for ArSL production. The dataset comprises 1950 audio signals with 131 corresponding texts, including words and phrases, and 262 ArSL videos. These videos were recorded by two expert signers and annotated using ELAN based on gloss representation. To evaluate the ArSL videos, we employ cosine similarity and mode distances based on MobileNetV2, and Euclidean distance based on MediaPipe. Additionally, we implement Jaccard similarity to evaluate the gloss representation, resulting in an overall similarity score of 85% between the glosses of the two ArSL videos. The evaluation highlights the complexity of creating an ArSL video corpus and reveals slight differences between the two videos. The findings emphasize the need for careful annotation and representation of ArSL videos to ensure accurate recognition and understanding. Overall, this work contributes to bridging the gap in accessible religious content for the deaf community by developing a multimodal framework and a comprehensive ArSL dataset.
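The abstract names two of the similarity measures used in the evaluation: Jaccard similarity over the gloss annotations of the two signers' videos, and cosine similarity over feature embeddings. The paper's exact pipeline is not reproduced here; the following is a minimal sketch of those two measures, with hypothetical gloss labels standing in for the real ELAN annotations:

```python
import numpy as np

def jaccard_similarity(glosses_a, glosses_b):
    """Jaccard similarity between two gloss sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(glosses_a), set(glosses_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors
    (e.g. per-frame embeddings from a CNN backbone)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical gloss annotations of the same utterance by two signers
signer1 = ["PRAY", "GOD", "MERCY", "DAY"]
signer2 = ["PRAY", "GOD", "MERCY", "NIGHT"]
print(jaccard_similarity(signer1, signer2))  # 3 shared / 5 total = 0.6
```

A score like the reported 85% would arise when most gloss labels agree across the two signers' recordings, with a small number of signer-specific variants.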

Source journal
CiteScore: 10.50
Self-citation rate: 8.70%
Annual output: 656 articles
Review time: 29 days
Journal description: In 2022 the Journal of King Saud University - Computer and Information Sciences became an author-paid open access journal. Authors who submit their manuscript after October 31st, 2021 are asked to pay an Article Processing Charge (APC) after acceptance of their paper to make their work immediately, permanently, and freely accessible to all. The Journal of King Saud University - Computer and Information Sciences is a refereed, international journal that covers both the foundations of computing and its practical applications.