Sign Language To Sign Language Translator

Sharath S R, Suraj S, Abishek Kumar G M, P Siddharth, Nalinadevi K
{"title":"Sign Language To Sign Language Translator","authors":"Sharath S R , Suraj S , Abishek Kumar G M , P Siddharth , Nalinadevi K","doi":"10.1016/j.procs.2025.03.213","DOIUrl":null,"url":null,"abstract":"<div><div>Sign languages differ between countries, regions, and even communities within the same country, leading to communication barriers when interacting with deaf individuals from different linguistic backgrounds. This paper introduces a novel approach for sign language-to-sign language translation, enabling seamless communication across diverse deaf communities. The proposed model translates source sign language images to avatar sign images of the target language by utilizing separate key point estimation models for recognizing static sign elements that include handshape, orientation and position, achieving an accuracy of 88%. The research work uses HamNoSys as the intermediate representation to capture the essential elements of signs in the translation process. The HamNoSys sequence migration task is accomplished using the Seq2Seq model with a BLEU-1 score of 0.85. The target HamNoSys sequences are converted to machine-readable format (SiGML) to render the 3D avatar sign images. Experiments are done using static signs from three distinct sign languages.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"260 ","pages":"Pages 373-381"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Procedia Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1877050925009573","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Sign languages differ across countries, regions, and even communities within the same country, creating communication barriers when deaf individuals from different linguistic backgrounds interact. This paper introduces a novel approach to sign language-to-sign language translation, enabling seamless communication across diverse deaf communities. The proposed model translates source sign language images into avatar sign images of the target language, using separate key point estimation models to recognize static sign elements, including handshape, orientation, and position, and achieving an accuracy of 88%. The work uses HamNoSys as the intermediate representation to capture the essential elements of signs during translation. The HamNoSys sequence migration task is accomplished with a Seq2Seq model, reaching a BLEU-1 score of 0.85. The target HamNoSys sequences are then converted to a machine-readable format (SiGML) to render the 3D avatar sign images. Experiments are conducted on static signs from three distinct sign languages.
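The abstract does not name the key point estimation library or the recognition models, so the following is only a minimal sketch of the kind of pipeline described: it assumes MediaPipe Hands for landmark extraction, and the per-element classifiers (handshape, orientation, position) are hypothetical stand-ins rather than the authors' models.

```python
# Sketch of static-sign key point extraction, assuming MediaPipe Hands.
# The separate handshape/orientation/position recognizers are hypothetical
# placeholders, not the models reported in the paper.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def extract_hand_keypoints(image_path):
    """Return a flat (21*3,) array of hand landmarks, or None if no hand is found."""
    image = cv2.imread(image_path)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1,
                        min_detection_confidence=0.5) as hands:
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    landmarks = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in landmarks]).flatten()

def classify_sign_elements(keypoints, handshape_clf, orientation_clf, position_clf):
    """Run one classifier per static sign element, mirroring the paper's idea of
    separate models per element (classifiers here are assumed, e.g. scikit-learn)."""
    features = keypoints.reshape(1, -1)
    return {
        "handshape": handshape_clf.predict(features)[0],
        "orientation": orientation_clf.predict(features)[0],
        "position": position_clf.predict(features)[0],
    }
```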
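The Seq2Seq architecture for HamNoSys sequence migration is likewise not detailed in the abstract. Below is a minimal GRU encoder-decoder sketch in PyTorch, assuming HamNoSys symbols have been tokenized into integer IDs; vocabulary sizes and hyperparameters are illustrative assumptions, not the paper's values.

```python
# Minimal encoder-decoder sketch for source-to-target HamNoSys migration
# (illustrative only; vocab sizes and dimensions are assumptions).
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=128, hid=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source HamNoSys token sequence into a context state.
        _, context = self.encoder(self.src_emb(src_ids))
        # Decode target tokens conditioned on that context (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), context)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

# Usage with assumed symbol inventories of 200 tokens per language:
model = Seq2Seq(src_vocab=200, tgt_vocab=200)
src = torch.randint(0, 200, (4, 12))   # batch of 4 source sequences, length 12
tgt = torch.randint(0, 200, (4, 12))
logits = model(src, tgt)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 200), tgt.reshape(-1))
```

The decoded target symbol sequence would then be wrapped into SiGML for avatar rendering, as the abstract describes.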