Posterior-Based Analysis of Spatio-Temporal Features for Sign Language Assessment

IF 2.9 · Q2 · Engineering, Electrical & Electronic
Neha Tarigopula, Sandrine Tornay, Ozge Mercanoglu Sincan, Richard Bowden, Mathew Magimai.-Doss
DOI: 10.1109/OJSP.2025.3531781
Journal: IEEE Open Journal of Signal Processing, vol. 6, pp. 284–292
Published: 2025-01-17
Full text: https://ieeexplore.ieee.org/document/10845152/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845152
Citations: 0

Abstract

Sign language conveys information through multiple channels composed of manual (handshape, hand movement) and non-manual (facial expression, mouthing, body posture) components. Sign language assessment involves giving granular feedback to a learner on the correctness of the manual and non-manual components, aiding the learner's progress. Existing methods rely on handcrafted skeleton-based features for hand movement within a KL-HMM framework to identify errors in manual components. However, modern deep learning models offer powerful spatio-temporal representations of videos that capture hand movement and facial expressions. Despite their success in classification tasks, these representations often struggle to attribute errors to specific sources, such as incorrect handshape, improper movement, or incorrect facial expressions. To address this limitation, we leverage and analyze the spatio-temporal representations from Inflated 3D Convolutional Networks (I3D) and integrate them into the KL-HMM framework to assess sign language videos on both manual and non-manual components. By applying masking and cropping techniques, we isolate and evaluate the distinct channels: hand movement and facial expressions using the I3D model, and handshape using a CNN-based model. Our approach outperforms traditional methods based on handcrafted features, as validated through experiments on the SMILE-DSGS dataset, demonstrating that it can enhance the effectiveness of sign language learning tools.
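For readers unfamiliar with the KL-HMM scoring the abstract refers to, the following is a minimal sketch of the standard KL-HMM local score (the formulation introduced by Aradilla et al., on which KL-HMM-based assessment builds); the notation here is ours, not the paper's.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
Each HMM state $i$ is parameterized by a categorical distribution
$y_i = (y_i^1,\dots,y_i^D)$ over the same $D$ classes as the posterior
feature vector $z_t = (z_t^1,\dots,z_t^D)$ extracted at frame $t$
(here, posteriors derived from the I3D or CNN representations). The
local score replaces the usual emission likelihood with a KL divergence,
\[
S(y_i, z_t) = D_{\mathrm{KL}}\!\left(y_i \,\Vert\, z_t\right)
            = \sum_{d=1}^{D} y_i^{d}\,\log\frac{y_i^{d}}{z_t^{d}},
\]
and decoding and training minimize the divergence accumulated along the
state sequence rather than maximizing a likelihood.
\end{document}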
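The masking-and-cropping step can be pictured with a short sketch. Below is a minimal, NumPy-only illustration of isolating one channel per input stream before feature extraction; the box coordinates, helper names (mask_outside_boxes, crop_boxes), and routing comments are illustrative assumptions, not the paper's actual pipeline.

# Minimal sketch (assumptions noted above): mask or crop each clip so
# that only one channel remains visible before a network sees it.
import numpy as np

def mask_outside_boxes(frames: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """Keep only the pixels inside per-frame boxes; zero the rest.

    frames: (T, H, W, C) uint8 video clip.
    boxes:  (T, 4) int array of (x0, y0, x1, y1) per frame.
    """
    masked = np.zeros_like(frames)
    for t, (x0, y0, x1, y1) in enumerate(boxes):
        masked[t, y0:y1, x0:x1] = frames[t, y0:y1, x0:x1]
    return masked

def crop_boxes(frames: np.ndarray, boxes: np.ndarray, size: int = 112) -> np.ndarray:
    """Crop per-frame boxes and resize (nearest neighbour) to a fixed size."""
    T = frames.shape[0]
    out = np.zeros((T, size, size, frames.shape[3]), dtype=frames.dtype)
    for t, (x0, y0, x1, y1) in enumerate(boxes):
        patch = frames[t, y0:y1, x0:x1]
        ys = np.arange(size) * patch.shape[0] // size
        xs = np.arange(size) * patch.shape[1] // size
        out[t] = patch[ys][:, xs]
    return out

if __name__ == "__main__":
    clip = np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8)
    hand_boxes = np.tile([60, 120, 160, 200], (16, 1))  # assumed hand region
    face_boxes = np.tile([80, 20, 150, 90], (16, 1))    # assumed face region

    movement_stream = mask_outside_boxes(clip, hand_boxes)  # -> I3D (movement)
    face_stream = mask_outside_boxes(clip, face_boxes)      # -> I3D (expression)
    handshape_patches = crop_boxes(clip, hand_boxes)        # -> handshape CNN
    print(movement_stream.shape, face_stream.shape, handshape_patches.shape)

Each isolated stream would then be passed through the corresponding network (I3D for hand movement and facial expression, a CNN for the cropped handshape patches), and the resulting posteriors feed the KL-HMM scoring above.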
Source journal: IEEE Open Journal of Signal Processing
CiteScore: 5.30 · Self-citation rate: 0.00% · Review time: 22 weeks