Towards a Transcription System of Sign Language Video Resources via Motion Trajectory Factorisation

M. Borg, K. Camilleri
DOI: 10.1145/3103010.3103020
Venue: Proceedings of the 2017 ACM Symposium on Document Engineering
Published: 2017-08-31
Citations: 4

Abstract

Sign languages are visual languages used by the Deaf community for communication purposes. Whilst recent years have seen rapid growth in the quantity of sign language video collections available online, much of this material is hard to access and process, both because it lacks associated text-based tagging information and because 'extracting' content directly from video remains a very challenging problem. Support for representing and documenting sign language video resources in terms of sign writing systems is also limited. In this paper, we start with a brief survey of existing sign language technologies and assess their state of the art from the perspective of a sign language digital information processing system. We then introduce our work, which focuses on vision-based sign language recognition. We apply the factorisation method to sign language videos in order to factor out the signer's motion from the structure of the hands. We then model the motion of the hands as a weighted combination of linear trajectory bases and apply a set of classifiers to the basis weights in order to recognise meaningful phonological elements of sign language. We demonstrate how these classification results can be used to transcribe sign videos into a written representation for annotation and documentation purposes. Results from our evaluation process indicate the validity of our proposed framework.
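The core idea of modelling a hand trajectory as a weighted combination of linear trajectory bases can be sketched in a few lines. The snippet below is an illustrative stand-in, not the authors' implementation: it uses a DCT basis (a common choice of linear trajectory basis in the motion-factorisation literature; the paper's exact basis is not specified here) and fits the basis weights by least squares. The flattened weights could then serve as the feature vector fed to phonological classifiers, as the abstract describes.

```python
import numpy as np

def dct_basis(T, K):
    """First K discrete-cosine basis vectors sampled over T frames.

    An illustrative linear trajectory basis; each column is one basis
    trajectory of increasing temporal frequency.
    """
    t = np.arange(T)
    B = np.zeros((T, K))
    for k in range(K):
        B[:, k] = np.cos(np.pi * k * (2 * t + 1) / (2 * T))
    return B

def fit_trajectory_weights(traj, K=6):
    """Least-squares weights w such that traj ~= B @ w.

    traj: (T, D) array of hand positions over T frames (D=2 for image
    coordinates). Returns a (K, D) weight matrix; flattened, it can be
    used as a compact feature vector for downstream classifiers.
    """
    T = traj.shape[0]
    B = dct_basis(T, K)
    w, *_ = np.linalg.lstsq(B, traj, rcond=None)
    return w

# Toy example: a smooth circular hand movement over 50 frames.
T = 50
t = np.linspace(0, 2 * np.pi, T)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)

w = fit_trajectory_weights(traj, K=6)
recon = dct_basis(T, 6) @ w
print("weight matrix shape:", w.shape)
print("reconstruction residual:", np.linalg.norm(traj - recon))
```

Increasing `K` trades a larger feature vector for a more faithful reconstruction of fast or complex hand movements; the choice of basis size is a tuning decision, not something fixed by the abstract.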