{"title":"Low Cost Full Duplex Wireless Glove for Static and Trajectory Based American Sign Language Translation to Multimedia Output","authors":"Dhananjai Bajpai, V. Mishra","doi":"10.1109/CICN.2016.133","DOIUrl":null,"url":null,"abstract":"This paper provides a full duplex, low cost, portable and efficient applicative architecture for receiving, transmitting and translating static or trajectory based American Sign Language (ASL) gestures into different multimedia outputs. It consists of one six axis motion processing unit, five customized optical bend sensors and three contact sensors which are used to send a stream of digital sensory data to AVR-328 based Central Processing Unit (CPU). CPU uses Kalman filter to stabilize real time pitch, roll values and uses K-Nearest Neighbor (KNN) algorithm to find nearest neighbors for incoming sensory stream. When input data stream successfully satisfies specific set of neighbors, then output is produced in terms of text and speech using PWM based Text to Speech (TTS) algorithm. Serial port of the CPU is used to wirelessly send this output to various secondary computation devices for displaying multimedia outputs, whereas messages from these secondary devices are received over the same protocol, which are directly displayed on LCD of glove. Further, this architecture is programmed to create, or customize the database and final outputs.","PeriodicalId":189849,"journal":{"name":"2016 8th International Conference on Computational Intelligence and Communication Networks (CICN)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 8th International Conference on Computational Intelligence and Communication Networks (CICN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CICN.2016.133","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
This paper presents a full-duplex, low-cost, portable, and efficient architecture for receiving, transmitting, and translating static or trajectory-based American Sign Language (ASL) gestures into multimedia outputs. The glove comprises one six-axis motion processing unit, five customized optical bend sensors, and three contact sensors, which stream digital sensory data to an AVR-328-based Central Processing Unit (CPU). The CPU applies a Kalman filter to stabilize real-time pitch and roll values and uses the K-Nearest Neighbor (KNN) algorithm to find the nearest neighbors of the incoming sensory stream. When the input data stream matches a specific set of neighbors, output is produced as text and speech using a PWM-based Text-to-Speech (TTS) algorithm. The CPU's serial port wirelessly sends this output to various secondary computation devices for multimedia display, while messages from those devices are received over the same protocol and shown directly on the glove's LCD. The architecture is further programmed to create or customize the gesture database and final outputs.
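The pitch/roll stabilization step can be illustrated with a minimal one-dimensional Kalman filter that fuses a gyroscope rate with an accelerometer-derived angle. This is a generic sketch of the technique the abstract names, not the authors' implementation; the noise constants and data layout are illustrative assumptions.

```python
class SimpleKalman:
    """1-D Kalman filter fusing a gyro rate with an accelerometer angle.

    A minimal sketch of Kalman-based pitch/roll stabilization; the
    process/measurement noise values below are assumed, not from the paper.
    """

    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.q_angle = q_angle      # process noise for the angle state
        self.q_bias = q_bias        # process noise for the gyro-bias state
        self.r_measure = r_measure  # accelerometer measurement noise
        self.angle = 0.0            # filtered angle estimate (degrees)
        self.bias = 0.0             # estimated gyro bias (deg/s)
        self.P = [[0.0, 0.0], [0.0, 0.0]]  # 2x2 error covariance

    def update(self, accel_angle, gyro_rate, dt):
        # Predict: integrate the bias-corrected gyro rate.
        rate = gyro_rate - self.bias
        self.angle += dt * rate
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt

        # Update: correct the estimate with the accelerometer angle.
        innovation = accel_angle - self.angle
        s = P[0][0] + self.r_measure          # innovation covariance
        k0, k1 = P[0][0] / s, P[1][0] / s     # Kalman gains
        self.angle += k0 * innovation
        self.bias += k1 * innovation
        p00, p01 = P[0][0], P[0][1]
        P[0][0] -= k0 * p00
        P[0][1] -= k0 * p01
        P[1][0] -= k1 * p00
        P[1][1] -= k1 * p01
        return self.angle
```

Feeding the filter a steady accelerometer reading with zero gyro rate makes the estimate converge to that angle while the bias state absorbs any constant drift.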
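The KNN matching step the abstract describes can be sketched as a vote over the k closest stored gesture templates. The feature layout (a flat vector of bend, contact, and IMU readings) and the function name are hypothetical, since the paper's encoding is not given here.

```python
import math
from collections import Counter


def classify_gesture(sample, training_set, k=3):
    """Match one sensor frame against stored gesture templates via KNN.

    `sample` is a tuple of sensor readings; `training_set` is a list of
    (feature_vector, label) pairs. Hypothetical data layout -- a sketch
    of the KNN step, not the paper's exact feature encoding.
    """
    # Rank all templates by Euclidean distance to the incoming frame.
    ranked = sorted(
        (math.dist(sample, vec), label) for vec, label in training_set
    )
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

In the glove's pipeline, a frame that wins a clear vote would then be passed on to the TTS stage; ambiguous frames could be rejected by also checking the winning vote count against a threshold.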