Hand Alphabet Recognition for Dactylology Conversion to English Print Using Streaming Video Segmentation
Manuel Bautista Garcia, Teodoro Feria Revano, A. C. Yabut
International Conference on Computer and Communications Management
DOI: 10.1145/3479162.3479169
Citations: 2
Abstract
Assistive technologies have gained traction in the medical field over the last few decades, and novel approaches have been developed to support people with disabilities in communicating effectively. However, little research has addressed the other side of the coin: assistive technologies that help people without disabilities understand the language of the disabled. This study describes the early development of a hand alphabet recognition system intended to perform a working dactylology conversion from sign language to English print in a live streaming video. Through video analysis, each frame is processed with a segmentation technique that partitions it into distinct segments (e.g., the pixels of hand gestures). The dactylology conversion algorithm was implemented in a mobile application in which users can watch videos containing an on-screen sign language interpreter and understand the fingerspelling used for communication by hearing- and speech-impaired people. On a sample dataset of 13 American Sign Language videos, manually collected (n = 10) and recorded (n = 3), the application was evaluated for its accuracy in detecting the alphabet in a video (94.16%) and for the correctness of converting the detected alphabet into English print (89.65%). This study adds to the existing novel approaches that aim to promote positive social effects and improve quality of life both for people with disabilities and for everyone they interact with.
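The abstract describes a frame-by-frame pipeline (stream a video, segment the hand-gesture pixels in each frame, classify the resulting region into a letter, and accumulate letters into English print) but does not disclose the segmentation method or classifier used. The sketch below is only an illustration of that pipeline under assumed tooling: it uses OpenCV in Python with an illustrative YCrCb skin-color threshold for segmentation, and `classify_letter`, the threshold values, and the video filename are hypothetical placeholders, not the authors' implementation.

```python
# Minimal per-frame hand-segmentation sketch (assumption: OpenCV + skin-color
# thresholding in YCrCb space). Illustrates the frame -> segment -> letter flow
# described in the abstract; it is not the paper's actual method.
import cv2
import numpy as np

# Illustrative skin-tone bounds; not taken from the paper.
LOWER_SKIN = np.array([0, 135, 85], dtype=np.uint8)
UPPER_SKIN = np.array([255, 180, 135], dtype=np.uint8)


def segment_hand(frame):
    """Return a binary mask and the bounding box of the largest skin-colored region."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    ycrcb = cv2.cvtColor(blurred, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, LOWER_SKIN, UPPER_SKIN)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    hand = max(contours, key=cv2.contourArea)
    return mask, cv2.boundingRect(hand)  # (x, y, w, h) around the hand


def classify_letter(hand_crop):
    """Placeholder classifier; the paper does not publish its recognition model."""
    return "?"


def transcribe(video_path):
    """Walk a video frame by frame and accumulate recognized letters into text."""
    cap = cv2.VideoCapture(video_path)
    letters = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        _, box = segment_hand(frame)
        if box is not None:
            x, y, w, h = box
            letters.append(classify_letter(frame[y:y + h, x:x + w]))
    cap.release()
    return "".join(letters)


if __name__ == "__main__":
    print(transcribe("asl_sample.mp4"))  # hypothetical input file
```

In practice the mobile application would also need temporal smoothing (a letter is typically held across many frames), which this sketch omits for brevity.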