Rain Kristine B. Cabigting, Carl James U. Grantoza, Leonardo D. Valiente, Ericson D. Dimaunahan
{"title":"基于Jetson纳米的菲律宾手语双向交流系统,使用LSTM深度学习模型,用于残疾人和聋哑人","authors":"Rain Kristine B. Cabigting, Carl James U. Grantoza, Leonardo D. Valiente, Ericson D. Dimaunahan","doi":"10.1109/RAAI56146.2022.10092971","DOIUrl":null,"url":null,"abstract":"Communication is the foundation of what it is to be human. The majority of human communication is reliant on sounds. However, it is not the sole natural means of communication; other people employ alternative ways. One of which is the Deaf community’s language. Communication between the Deaf-mute community and hearing or able individuals is one of the various challenges the two parties encounter. In the Philippines, around 70% of the Filipino Deaf Community utilizes Filipino Sign Language (FSL) as their primary language, whereas some hearing persons may be illiterate in the Deaf community’s native language. With the given situation, this study created a twoway communication device using Jetson Nano, covering the proper translation of FSL to text and speech and the conversion of input speech into text. Ten (10) dynamic FSL gestures are considered in this study. The device used LSTM Deep Learning Model and MediaPipe to recognize the FSL gestures, then convert them into speech through Google Text-to-Speech (gTTS) API. The device also converts speech to text using Google Speech-to-Text (gSTT) API. Sixty (60) trials of a two-way conversation between a deaf-mute and a hearing person are performed. 
By conducting a Test of Proportion for a Two-Way Conversation, it has revealed that the prototype exceeded the standard value of 91.11%, which garnered an accuracy of 93.33%, rendering the device highly effective and reliable as a means of communication between a deaf-mute person and a fully able one.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Jetson Nano-Based Two-Way Communication System with Filipino Sign Language Recognition Using LSTM Deep Learning Model for Able and Deaf-Mute Persons\",\"authors\":\"Rain Kristine B. Cabigting, Carl James U. Grantoza, Leonardo D. Valiente, Ericson D. Dimaunahan\",\"doi\":\"10.1109/RAAI56146.2022.10092971\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Communication is the foundation of what it is to be human. The majority of human communication is reliant on sounds. However, it is not the sole natural means of communication; other people employ alternative ways. One of which is the Deaf community’s language. Communication between the Deaf-mute community and hearing or able individuals is one of the various challenges the two parties encounter. In the Philippines, around 70% of the Filipino Deaf Community utilizes Filipino Sign Language (FSL) as their primary language, whereas some hearing persons may be illiterate in the Deaf community’s native language. With the given situation, this study created a twoway communication device using Jetson Nano, covering the proper translation of FSL to text and speech and the conversion of input speech into text. Ten (10) dynamic FSL gestures are considered in this study. 
The device used LSTM Deep Learning Model and MediaPipe to recognize the FSL gestures, then convert them into speech through Google Text-to-Speech (gTTS) API. The device also converts speech to text using Google Speech-to-Text (gSTT) API. Sixty (60) trials of a two-way conversation between a deaf-mute and a hearing person are performed. By conducting a Test of Proportion for a Two-Way Conversation, it has revealed that the prototype exceeded the standard value of 91.11%, which garnered an accuracy of 93.33%, rendering the device highly effective and reliable as a means of communication between a deaf-mute person and a fully able one.\",\"PeriodicalId\":190255,\"journal\":{\"name\":\"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)\",\"volume\":\"69 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RAAI56146.2022.10092971\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RAAI56146.2022.10092971","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Jetson Nano-Based Two-Way Communication System with Filipino Sign Language Recognition Using LSTM Deep Learning Model for Able and Deaf-Mute Persons
Communication is the foundation of what it is to be human. Most human communication relies on sound, but speech is not the only natural means of communication; some people use alternatives, one of which is the Deaf community's sign language. Communication between the Deaf-mute community and hearing individuals is one of the many challenges the two groups face. In the Philippines, around 70% of the Filipino Deaf community uses Filipino Sign Language (FSL) as their primary language, while many hearing persons cannot understand it. Given this situation, this study developed a two-way communication device on the Jetson Nano that translates FSL into text and speech and converts input speech into text. Ten (10) dynamic FSL gestures are considered in this study. The device uses MediaPipe and an LSTM deep learning model to recognize the FSL gestures and converts the recognized gestures into speech through the Google Text-to-Speech (gTTS) API; it also converts speech to text using the Google Speech-to-Text (gSTT) API. Sixty (60) trials of a two-way conversation between a deaf-mute and a hearing person were performed. A Test of Proportion for the two-way conversation revealed that the prototype's accuracy of 93.33% exceeded the standard value of 91.11%, rendering the device a highly effective and reliable means of communication between a deaf-mute person and a fully able one.
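The recognition pipeline the abstract describes (per-frame MediaPipe landmark features fed through an LSTM, then a classification over the ten gestures) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the feature size of 63 assumes MediaPipe's 21 hand landmarks with (x, y, z) coordinates, and the hidden size, sequence length, and random weights are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's code): one LSTM pass over a sequence of
# MediaPipe-style hand-landmark feature vectors, ending in a 10-way gesture
# classification. All weights here are random stand-ins for a trained model.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; W, U, b pack the input/forget/cell/output gates."""
    z = W @ x + U @ h + b            # shape (4*hidden,)
    hid = h.shape[0]
    i = sigmoid(z[:hid])             # input gate
    f = sigmoid(z[hid:2 * hid])      # forget gate
    g = np.tanh(z[2 * hid:3 * hid])  # candidate cell state
    o = sigmoid(z[3 * hid:])         # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# 21 hand landmarks x 3 coordinates = 63 features per frame (MediaPipe hands);
# 30 frames per gesture clip and 16 hidden units are assumed values.
feat, hid, n_classes, T = 63, 16, 10, 30
W = rng.normal(0, 0.1, (4 * hid, feat))
U = rng.normal(0, 0.1, (4 * hid, hid))
b = np.zeros(4 * hid)
W_out = rng.normal(0, 0.1, (n_classes, hid))

x_seq = rng.normal(size=(T, feat))   # a gesture clip (random stand-in data)
h, c = np.zeros(hid), np.zeros(hid)
for x in x_seq:                      # unroll the LSTM over the clip
    h, c = lstm_step(x, h, c, W, U, b)

logits = W_out @ h                   # classify from the final hidden state
probs = np.exp(logits - logits.max())
probs /= probs.sum()                 # softmax over the 10 FSL gestures
pred = int(np.argmax(probs))         # index of the predicted gesture
```

In the paper's device, the predicted gesture label would then be passed to the gTTS API to produce speech; that network call is omitted here.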