Helcy D. Alon, Michael Angelo D. Ligayo, Mark P. Melegrito, Christopher Franco Cunanan, Edgar E. Uy II
2021 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), 17 March 2021. DOI: 10.1109/ICCIKE51210.2021.9410803
Deep-Hand: A Deep Inference Vision Approach of Recognizing a Hand Sign Language using American Alphabet
Sign language helps people with hearing or speech disabilities who cannot otherwise communicate easily with others. Communicating with deaf people is a challenge for speakers who do not know sign language. This study proposes to assist people with such disabilities by recognizing American Sign Language through its corresponding hand gestures, so that deaf individuals can communicate and interact with others more conveniently. The study proposes a hand-gesture (hand sign language) detector trained with the YOLOv3 algorithm, which detects a hand gesture and recognizes its equivalent alphabet letter. The study used tools such as LabelImg to annotate the dataset, categorizing each hand-gesture image by its equivalent alphabet letter. Model 18, with 95.1804% training accuracy, 90.8242% validation accuracy, and a mAP of 0.8275, was used for the final testing. When video containing different hand gestures was presented, every detected hand gesture scored above 90%.
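The abstract reports the model's quality as mean average precision (mAP of 0.8275). As background for that metric, the sketch below shows how detections from a detector such as YOLOv3 are typically scored: each predicted box is matched to a ground-truth box by intersection-over-union (IoU), and per-class average precision is computed from the resulting true/false positives. The boxes, scores, and function names here are illustrative, not taken from the paper, and the AP formula is a simplified, uninterpolated variant.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def average_precision(detections, ground_truth, iou_thresh=0.5):
    """Simplified AP for one class (e.g. one alphabet letter).

    detections: list of (confidence, box); ground_truth: list of boxes.
    A detection is a true positive if it has IoU >= iou_thresh with a
    not-yet-matched ground-truth box; otherwise it is a false positive.
    """
    detections = sorted(detections, key=lambda d: -d[0])  # highest confidence first
    matched = [False] * len(ground_truth)
    tp, fp, precisions = 0, 0, []
    for _score, box in detections:
        best, best_i = 0.0, -1
        for i, gt in enumerate(ground_truth):
            overlap = iou(box, gt)
            if overlap > best and not matched[i]:
                best, best_i = overlap, i
        if best >= iou_thresh:
            matched[best_i] = True
            tp += 1
            precisions.append(tp / (tp + fp))  # precision at each new recall point
        else:
            fp += 1
    # Average the precision values over all ground-truth instances
    # (uninterpolated AP; missed ground truths contribute zero).
    return sum(precisions) / len(ground_truth) if ground_truth else 0.0
```

The mAP reported in the abstract would then be this AP averaged over the 26 letter classes. For example, one detection overlapping the single ground-truth box (IoU 0.81) plus one stray false positive at lower confidence still yields an AP of 1.0, because the false positive comes after the last true positive in confidence order.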