{"title":"Towards the automatic generation of sign language gestures from 2D images","authors":"Ibtissem Talbi, Oussama El Ghoul, M. Jemni","doi":"10.1109/ICTA.2015.7426872","DOIUrl":null,"url":null,"abstract":"This paper presents a novel approach for generating sign language gestures in the form of video sequences. The first focus of our work is to propose a new approach to simulate the facial expressions. The proposed approach consists of two parts: the first one consists of creating the in between images and this by combining image deformation model and a color change. The second part is responsible to generate a video sequence from the obtained images. In our approach we will treat the case of puffing of the cheeks, so, we calculate the color of each pixel of an image, and as a first step we used the Phong illumination model. The main goal of this work is to offer the deaf people new opportunities to access information and communicate via video sequences.","PeriodicalId":375443,"journal":{"name":"2015 5th International Conference on Information & Communication Technology and Accessibility (ICTA)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 5th International Conference on Information & Communication Technology and Accessibility (ICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTA.2015.7426872","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
This paper presents a novel approach for generating sign language gestures in the form of video sequences. The first focus of our work is a new approach to simulating facial expressions. The proposed approach consists of two parts: the first creates the in-between images by combining an image deformation model with a color change; the second generates a video sequence from the resulting images. We treat the case of puffed cheeks, which requires computing the color of each pixel of the image; as a first step we use the Phong illumination model. The main goal of this work is to offer deaf people new opportunities to access information and to communicate via video sequences.
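The abstract names the Phong illumination model as the per-pixel shading step but gives no implementation details, so the following is a minimal sketch of classic Phong shading in Python. All constants, vector values, and the phong_intensity helper are illustrative assumptions, not the authors' code.

# Minimal sketch of per-pixel Phong shading (I = ka*ia + kd*(L.N)*id + ks*(R.V)^n*is),
# assuming single-channel intensities; all coefficients below are illustrative,
# not the parameters used in the paper.
import numpy as np

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.1, kd=0.6, ks=0.3, shininess=16,
                    ambient=1.0, diffuse=1.0, specular=1.0):
    # Normalize all direction vectors
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    # Diffuse (Lambertian) term, clamped to zero for back-facing light
    ndotl = max(np.dot(n, l), 0.0)
    # Reflection of the light direction about the surface normal
    r = 2.0 * ndotl * n - l
    rdotv = max(np.dot(r, v), 0.0)
    return (ka * ambient
            + kd * ndotl * diffuse
            + ks * (rdotv ** shininess) * specular)

# Example: a surface point whose normal tilts outward as the cheek puffs
print(phong_intensity(normal=np.array([0.2, 0.0, 1.0]),
                      light_dir=np.array([0.0, 0.0, 1.0]),
                      view_dir=np.array([0.0, 0.0, 1.0])))

Under the same assumptions, the in-between images described above could be produced by linearly interpolating the deformation parameters between key poses and re-shading each interpolated frame with a model of this kind, yielding the frame sequence that the second part assembles into video.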