Sign Words Annotation Assistance Using Japanese Sign Language Words Recognition

Natsuki Takayama, Hiroki Takahashi
DOI: 10.1109/CW.2018.00048
2018 International Conference on Cyberworlds (CW), June 2018
Citations: 1

Abstract

A Japanese sign language corpus is essential to advance analysis and recognition research on Japanese sign language. Building such a corpus requires collecting large-scale video data and annotating it with linguistic information. In general, building a sign language corpus is tedious work, and assistance is necessary. This paper describes a method to assist the annotation of sign words using Japanese sign language word recognition. The word recognition extracts sign features from a video, segments the video into meaningful units, and annotates them with word labels automatically. The user's annotation task is thus reduced from fully manual work to confirming and correcting the automatic annotation. The proposed sign word recognition consists of body-parts tracking, feature extraction, and word classification. Five approaches to handling the multiple body parts, including i) feature fusion and ii) multi-stream HMMs, are applied and compared. We built a video database of Japanese sign language words and a manual annotation interface to evaluate the proposed method. The database includes 92 Japanese sign language words signed by ten native signers. Of the 4,590 recorded videos, 3,900 videos of 78 words, excluding recording and signing errors, are used for the evaluation. The classification accuracies were 75.88% and 93.35% under the signer-open and trial-open conditions, respectively, when parts-based feature fusion and a multi-stream HMM with relative weights for the body parts were employed. Moreover, the expected work reduction ratio of annotation tasks using the interface was 38.01%.
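The multi-stream HMM with relative weights mentioned above can be sketched as follows: each body-part stream has its own per-word model, and a word's combined score is a weighted sum of per-stream log-likelihoods. This is a minimal illustrative sketch, not the paper's implementation; the part names, weights, and log-likelihood values are hypothetical stand-ins for trained HMMs.

```python
def combine_streams(stream_loglikes, weights):
    """Combined multi-stream score: sum over streams of w_s * log p(O_s | word model)."""
    return sum(weights[part] * ll for part, ll in stream_loglikes.items())

def classify_word(per_word_scores, weights):
    """Return the word whose weighted multi-stream score is highest."""
    return max(per_word_scores,
               key=lambda w: combine_streams(per_word_scores[w], weights))

# Relative weights for the body-part streams (hypothetical values).
weights = {"right_hand": 0.5, "left_hand": 0.3, "body": 0.2}

# Log-likelihoods that per-part HMMs would assign to one input video
# (hypothetical values standing in for trained models).
scores = {
    "HELLO":  {"right_hand": -10.0, "left_hand": -12.0, "body": -8.0},
    "THANKS": {"right_hand":  -9.0, "left_hand": -15.0, "body": -9.0},
}

print(classify_word(scores, weights))  # HELLO: -10.2 beats THANKS: -10.8
```

Adjusting the relative weights shifts how much each body part influences the decision, which is the design choice the paper compares against plain feature fusion.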