Cross-Attention Based Influence Model for Manual and Nonmanual Sign Language Analysis

Lipisha Chaudhary, Fei Xu, Ifeoma Nwogu
{"title":"手动和非手动手语分析中基于交叉注意力的影响模型","authors":"Lipisha Chaudhary, Fei Xu, Ifeoma Nwogu","doi":"arxiv-2409.08162","DOIUrl":null,"url":null,"abstract":"Both manual (relating to the use of hands) and non-manual markers (NMM), such\nas facial expressions or mouthing cues, are important for providing the\ncomplete meaning of phrases in American Sign Language (ASL). Efforts have been\nmade in advancing sign language to spoken/written language understanding, but\nmost of these have primarily focused on manual features only. In this work,\nusing advanced neural machine translation methods, we examine and report on the\nextent to which facial expressions contribute to understanding sign language\nphrases. We present a sign language translation architecture consisting of\ntwo-stream encoders, with one encoder handling the face and the other handling\nthe upper body (with hands). We propose a new parallel cross-attention decoding\nmechanism that is useful for quantifying the influence of each input modality\non the output. The two streams from the encoder are directed simultaneously to\ndifferent attention stacks in the decoder. Examining the properties of the\nparallel cross-attention weights allows us to analyze the importance of facial\nmarkers compared to body and hand features during a translating task.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-Attention Based Influence Model for Manual and Nonmanual Sign Language Analysis\",\"authors\":\"Lipisha Chaudhary, Fei Xu, Ifeoma Nwogu\",\"doi\":\"arxiv-2409.08162\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Both manual (relating to the use of hands) and non-manual markers (NMM), such\\nas facial expressions or mouthing cues, are important for providing the\\ncomplete meaning of phrases in American Sign Language (ASL). Efforts have been\\nmade in advancing sign language to spoken/written language understanding, but\\nmost of these have primarily focused on manual features only. In this work,\\nusing advanced neural machine translation methods, we examine and report on the\\nextent to which facial expressions contribute to understanding sign language\\nphrases. We present a sign language translation architecture consisting of\\ntwo-stream encoders, with one encoder handling the face and the other handling\\nthe upper body (with hands). We propose a new parallel cross-attention decoding\\nmechanism that is useful for quantifying the influence of each input modality\\non the output. The two streams from the encoder are directed simultaneously to\\ndifferent attention stacks in the decoder. 
Examining the properties of the\\nparallel cross-attention weights allows us to analyze the importance of facial\\nmarkers compared to body and hand features during a translating task.\",\"PeriodicalId\":501130,\"journal\":{\"name\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08162\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Both manual (relating to the use of hands) and non-manual markers (NMM), such as facial expressions or mouthing cues, are important for providing the complete meaning of phrases in American Sign Language (ASL). Efforts have been made in advancing sign language to spoken/written language understanding, but most of these have primarily focused on manual features only. In this work, using advanced neural machine translation methods, we examine and report on the extent to which facial expressions contribute to understanding sign language phrases. We present a sign language translation architecture consisting of two-stream encoders, with one encoder handling the face and the other handling the upper body (with hands). We propose a new parallel cross-attention decoding mechanism that is useful for quantifying the influence of each input modality on the output. The two streams from the encoder are directed simultaneously to different attention stacks in the decoder. Examining the properties of the parallel cross-attention weights allows us to analyze the importance of facial markers compared to body and hand features during a translating task.
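To make the decoding scheme concrete, below is a minimal PyTorch sketch of a single decoder layer with two parallel cross-attention stacks, one attending to the face-encoder stream and the other to the upper-body/hand-encoder stream. This is an illustrative reconstruction from the abstract only, not the authors' implementation: the class and argument names, the simple additive fusion of the two streams, and the dimensions are assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of a decoder layer with
# parallel cross-attention over two encoder streams: face and upper body/hands.
import torch
import torch.nn as nn

class ParallelCrossAttentionDecoderLayer(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Masked self-attention over the partial translation (mask omitted for brevity).
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Two separate cross-attention modules, one per encoder stream.
        self.cross_attn_face = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn_body = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, tgt, face_mem, body_mem):
        # Self-attention over the target (spoken-language) token embeddings.
        x = self.norm1(tgt + self.self_attn(tgt, tgt, tgt, need_weights=False)[0])
        # Parallel cross-attention: the same queries attend to each stream.
        face_out, face_w = self.cross_attn_face(x, face_mem, face_mem)
        body_out, body_w = self.cross_attn_body(x, body_mem, body_mem)
        # Fuse the two streams (a simple sum here; the paper's fusion may differ).
        x = self.norm2(x + face_out + body_out)
        x = self.norm3(x + self.ffn(x))
        # Returning both weight tensors allows comparing how strongly the output
        # tokens attend to facial markers versus body/hand features.
        return x, face_w, body_w
```

Under this reading, the per-head cross-attention weights `face_w` and `body_w` are the quantities one would aggregate over a test set to estimate the relative influence of the facial versus manual modality on the translation output.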