Liang Jinghua, Yin Baocai, Wang Lichun, Kong De-hui, Wang Yufei
{"title":"中国手语动画的多种手势生成","authors":"Liang Jinghua, Yin Baocai, Wang Lichun, Kong De-hui, Wang Yufei","doi":"10.1109/ICDH.2012.44","DOIUrl":null,"url":null,"abstract":"In concatenating-based sign language synthesis, sign language words are usually captured in context free environment to construct database, and then the words corresponding with the input nature text are selected from the database and concatenated into sentences for synthesis. One problem here is that sign language words would exhibit diversified expression while embedded in different context. Including all kinds of expression for all sign language words is infeasible due to the difficulty in capturing, reserving and retrieving. Based on formatted sign language description and nonlinear magnification, this paper proposes a novel diversified gesture generation method for sign language expression in different context. Our method trains a computing model for diversified sign language gesture in \"emphasis\" context based on the data in the neutral state and the biggest intensity of emphasis collected from the teacher of deaf, and generates sign language gesture with any intensity of \"emphasis\" from the model and the data in neutral state, which makes the synthesized sign language animation more accurate and intelligible.","PeriodicalId":308799,"journal":{"name":"2012 Fourth International Conference on Digital Home","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Diversified Gesture Generation for Chinese Sign Language Animation\",\"authors\":\"Liang Jinghua, Yin Baocai, Wang Lichun, Kong De-hui, Wang Yufei\",\"doi\":\"10.1109/ICDH.2012.44\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In concatenating-based sign language synthesis, sign language words are usually captured in context free environment to construct database, and then the words corresponding with the input nature text are selected from the database and concatenated into sentences for synthesis. One problem here is that sign language words would exhibit diversified expression while embedded in different context. Including all kinds of expression for all sign language words is infeasible due to the difficulty in capturing, reserving and retrieving. Based on formatted sign language description and nonlinear magnification, this paper proposes a novel diversified gesture generation method for sign language expression in different context. 
Our method trains a computing model for diversified sign language gesture in \\\"emphasis\\\" context based on the data in the neutral state and the biggest intensity of emphasis collected from the teacher of deaf, and generates sign language gesture with any intensity of \\\"emphasis\\\" from the model and the data in neutral state, which makes the synthesized sign language animation more accurate and intelligible.\",\"PeriodicalId\":308799,\"journal\":{\"name\":\"2012 Fourth International Conference on Digital Home\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-11-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 Fourth International Conference on Digital Home\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDH.2012.44\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Fourth International Conference on Digital Home","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDH.2012.44","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Diversified Gesture Generation for Chinese Sign Language Animation
In concatenation-based sign language synthesis, sign language words are usually captured in a context-free environment to construct a database; the words corresponding to the input natural-language text are then selected from the database and concatenated into sentences for synthesis. One problem is that sign language words exhibit diversified expression when embedded in different contexts, and including every variant of every sign language word is infeasible because of the difficulty of capturing, storing, and retrieving them. Based on a formatted sign language description and nonlinear magnification, this paper proposes a novel diversified gesture generation method for sign language expression in different contexts. Our method trains a computational model of diversified sign language gestures in the "emphasis" context from data in the neutral state and at the maximum emphasis intensity, both collected from a teacher of the deaf, and then generates sign language gestures at any emphasis intensity from the model and the neutral-state data, which makes the synthesized sign language animation more accurate and intelligible.
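As a rough illustration of the idea described above (not the authors' actual model), the following Python sketch generates a pose at an arbitrary "emphasis" intensity by nonlinearly magnifying the offset between a neutral-state pose and a maximum-emphasis pose. The power-curve magnification, the function names, and the joint-angle representation are assumptions made for the sake of the example; the abstract does not specify the exact form of the nonlinear magnification.

import numpy as np

# Minimal sketch, assuming gestures are represented as vectors of joint angles
# and that emphasis is modeled as a nonlinear blend between two captured poses.

def emphasized_pose(neutral, max_emphasis, intensity, gamma=2.0):
    """Blend neutral and maximum-emphasis joint parameters.

    neutral, max_emphasis : arrays of joint angles (same shape)
    intensity             : emphasis level in [0, 1]; 0 = neutral, 1 = maximum
    gamma                 : curvature of the nonlinear magnification (assumed)
    """
    neutral = np.asarray(neutral, dtype=float)
    max_emphasis = np.asarray(max_emphasis, dtype=float)
    # Nonlinear weight: small intensities change the pose only slightly,
    # while large intensities approach the captured maximum-emphasis pose.
    w = np.clip(intensity, 0.0, 1.0) ** gamma
    return neutral + w * (max_emphasis - neutral)

# Hypothetical example: three joint angles (in degrees) for one key frame.
neutral_frame = [10.0, 45.0, 5.0]
emphasis_frame = [25.0, 80.0, 20.0]
print(emphasized_pose(neutral_frame, emphasis_frame, intensity=0.5))

Under these assumptions, intensities between 0 and 1 yield intermediate gestures without having to capture and store a separate motion clip for every level of emphasis, which is the storage and retrieval problem the abstract points to.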