Automatic sign language to text translation using MediaPipe and transformer architectures

IF 5.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Wesley F. Maia , António M. Lopes , Sergio A. David
{"title":"Automatic sign language to text translation using MediaPipe and transformer architectures","authors":"Wesley F. Maia ,&nbsp;António M. Lopes ,&nbsp;Sergio A. David","doi":"10.1016/j.neucom.2025.130421","DOIUrl":null,"url":null,"abstract":"<div><div>This study presents a transformer-based architecture for translating Sign Language to spoken language text using embeddings of body keypoints, with the mediation of glosses. To the best of our knowledge, this work is the first to successfully leverage body keypoints for Sign Language-to-text translation, achieving comparable performance to baseline models without reducing translation quality. Our approach introduces extensive augmentation techniques for body keypoints, and convolutional keypoint embeddings, and integrates Connectionist Temporal Classification Loss and position encoding for Sign2Gloss translation. For the Gloss2Text stage, we employ fine-tuning of BART, a state-of-the-art transformer model. Evaluation on the Phoenix14T dataset demonstrates that our integrated Sign2Gloss2Text model achieves competitive performance, with BLEU-4 scores that show marginal differences compared to baseline models using pixel embeddings. On the How2Sign dataset, which lacks gloss annotations, direct Sign2Text translation posed challenges, as reflected in lower BLEU-4 scores, highlighting the limitations of gloss-free approaches. This work addresses the narrow domain of the datasets and the unidirectional nature of the translation process while demonstrating the potential of body keypoints for Sign Language Translation. Future work will focus on enhancing the model’s ability to capture nuanced and complex contexts, thereby advancing accessibility and assistive technologies for bridging communication between individuals with hearing impairments and the hearing community.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"642 ","pages":"Article 130421"},"PeriodicalIF":5.5000,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225010938","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This study presents a transformer-based architecture for translating Sign Language to spoken language text using embeddings of body keypoints, with the mediation of glosses. To the best of our knowledge, this work is the first to successfully leverage body keypoints for Sign Language-to-text translation, achieving comparable performance to baseline models without reducing translation quality. Our approach introduces extensive augmentation techniques for body keypoints and convolutional keypoint embeddings, and integrates Connectionist Temporal Classification (CTC) loss and positional encoding for Sign2Gloss translation. For the Gloss2Text stage, we employ fine-tuning of BART, a state-of-the-art transformer model. Evaluation on the Phoenix14T dataset demonstrates that our integrated Sign2Gloss2Text model achieves competitive performance, with BLEU-4 scores that show marginal differences compared to baseline models using pixel embeddings. On the How2Sign dataset, which lacks gloss annotations, direct Sign2Text translation posed challenges, as reflected in lower BLEU-4 scores, highlighting the limitations of gloss-free approaches. This work addresses the narrow domain of the datasets and the unidirectional nature of the translation process while demonstrating the potential of body keypoints for Sign Language Translation. Future work will focus on enhancing the model's ability to capture nuanced and complex contexts, thereby advancing accessibility and assistive technologies for bridging communication between individuals with hearing impairments and the hearing community.
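To make the Sign2Gloss stage described in the abstract concrete, the sketch below extracts pose and hand keypoints with MediaPipe Holistic, embeds them with 1-D convolutions over time, adds sinusoidal positional encoding, encodes the sequence with a transformer encoder, and trains with CTC loss. This is a minimal sketch under assumed choices: the landmark subset, layer sizes, gloss vocabulary size, and hyperparameters are illustrative and not taken from the paper.

```python
# Minimal Sign2Gloss sketch: MediaPipe keypoints -> conv embedding -> transformer -> CTC.
# All sizes (225-dim keypoints, d_model=256, 1066 glosses) are illustrative assumptions.
import math
import numpy as np
import torch
import torch.nn as nn
import mediapipe as mp


def extract_keypoints(rgb_frames):
    """Run MediaPipe Holistic on a list of RGB frames and return a (T, 225) array
    of pose (33) + left hand (21) + right hand (21) x/y/z coordinates; zeros when
    a body part is not detected in a frame."""
    feats = []
    with mp.solutions.holistic.Holistic(static_image_mode=False) as holistic:
        for frame in rgb_frames:
            res = holistic.process(frame)
            parts = []
            for lm_set, n in [(res.pose_landmarks, 33),
                              (res.left_hand_landmarks, 21),
                              (res.right_hand_landmarks, 21)]:
                if lm_set is not None:
                    parts.extend([[p.x, p.y, p.z] for p in lm_set.landmark])
                else:
                    parts.extend([[0.0, 0.0, 0.0]] * n)
            feats.append(np.asarray(parts, dtype=np.float32).reshape(-1))
    return np.stack(feats)


class SinusoidalPositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=2000):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):  # x: (B, T, D)
        return x + self.pe[: x.size(1)]


class Sign2GlossEncoder(nn.Module):
    """Convolutional keypoint embedding + transformer encoder, trained with CTC."""
    def __init__(self, in_dim=225, d_model=256, n_gloss=1066):
        super().__init__()
        # 1-D convolutions over the time axis embed the raw keypoint vectors.
        self.embed = nn.Sequential(
            nn.Conv1d(in_dim, d_model, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pos = SinusoidalPositionalEncoding(d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(d_model, n_gloss + 1)  # +1 for the CTC blank symbol

    def forward(self, keypoints):  # keypoints: (B, T, in_dim)
        x = self.embed(keypoints.transpose(1, 2)).transpose(1, 2)
        x = self.encoder(self.pos(x))
        return self.head(x).log_softmax(-1)  # (B, T, n_gloss + 1)


# Illustrative CTC training step on random tensors (shapes only, not real data).
model = Sign2GlossEncoder()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
kp = torch.randn(2, 120, 225)                 # batch of 2 clips, 120 frames each
log_probs = model(kp).transpose(0, 1)         # CTCLoss expects (T, B, C)
targets = torch.randint(1, 1067, (2, 10))     # gloss indices (0 is the blank)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 120),
           target_lengths=torch.full((2,), 10))
loss.backward()
```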
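For the Gloss2Text stage, the abstract states that BART is fine-tuned on gloss-to-text pairs. The following is a minimal sketch using Hugging Face Transformers; the checkpoint name (facebook/bart-base), the example gloss/sentence pair, and the optimizer settings are assumptions for illustration, not the paper's actual setup (Phoenix14T is German, so a German or multilingual checkpoint would likely be used in practice).

```python
# Minimal Gloss2Text sketch: fine-tune a pretrained BART seq2seq model on
# (gloss sequence, spoken-language sentence) pairs. Hypothetical data and settings.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One hypothetical training pair (glosses as source, sentence as target).
gloss_seq = "MORGEN REGEN NORD"
sentence = "morgen regnet es im norden"

batch = tokenizer(gloss_seq, return_tensors="pt")
labels = tokenizer(sentence, return_tensors="pt").input_ids

model.train()
out = model(**batch, labels=labels)  # cross-entropy loss over the target tokens
out.loss.backward()
optimizer.step()

# Inference: translate an unseen gloss sequence with beam search.
model.eval()
gen = model.generate(**tokenizer("MORGEN REGEN NORD", return_tensors="pt"),
                     num_beams=4, max_length=40)
print(tokenizer.decode(gen[0], skip_special_tokens=True))
```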
Source journal: Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.