Chinese sign language recognition and translation with virtual digital human dataset
Hao Zhang, Zenghui Liu, Zhihang Yan, Songrui Guo, ChunMing Gao, Xiyao Liu
Displays, Volume 87, Article 102989. DOI: 10.1016/j.displa.2025.102989. Published 2025-02-05.
Citations: 0
Abstract
Sign language recognition and translation are crucial for communication among individuals who are deaf or mute. Deep learning methods have advanced sign language tasks, surpassing traditional techniques in accuracy through autonomous data learning. However, the scarcity of annotated sign language datasets limits the potential of these methods in practical applications. To address this, we propose using digital twin technology to build a word-level virtual human system that automatically generates sign language sentences without human input, creating numerous sign language data pairs for efficient virtual-to-real transfer. To enhance the generalization of virtual sign language data and mitigate the bias between virtual and real data, we designed novel embedding representations and augmentation methods based on skeletal information. We also established a multi-task learning framework and a pose attention module for sign language recognition and translation. Our experiments confirm the efficacy of our approach, yielding state-of-the-art results in both recognition and translation.
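The abstract does not specify the skeletal augmentation scheme, but a common approach for narrowing the gap between clean virtual skeletons and noisy real pose estimates is geometric perturbation of keypoints. The following is a minimal sketch under that assumption; the function name `augment_skeleton`, the parameter ranges, and the noise model are illustrative choices, not the authors' published method.

```python
import numpy as np

def augment_skeleton(keypoints, rng=None,
                     max_rotation_deg=15.0,
                     scale_range=(0.9, 1.1),
                     jitter_std=0.01):
    """Randomly rotate, scale, and jitter a sequence of 2D skeletal
    keypoints of shape (frames, joints, 2), assumed normalized to
    roughly [-1, 1] around the body center."""
    rng = rng or np.random.default_rng()

    # Random in-plane rotation about the origin.
    theta = np.deg2rad(rng.uniform(-max_rotation_deg, max_rotation_deg))
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

    # Random isotropic scaling, mimicking signers of different
    # body sizes and varying camera distances.
    scale = rng.uniform(*scale_range)

    # Per-joint Gaussian jitter, mimicking pose-estimator noise
    # that rendered virtual data lacks.
    jitter = rng.normal(0.0, jitter_std, size=keypoints.shape)

    return keypoints @ rot.T * scale + jitter
```

Applied on the fly during training, such perturbations expose the model to pose variability that purely synthetic data would otherwise never contain, which is one plausible way to realize the virtual-to-real transfer the abstract describes.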
Journal introduction:
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human-factors engineers new to the field, will also occasionally be featured.