Arabic Sign Language Recognition Using EfficientnetB1 and Transfer Learning Technique
Basel A. Dabwan, M. Jadhav, Yahya A. Ali, Fekry Olayah
2023 International Conference on IT Innovation and Knowledge Discovery (ITIKD), 8 March 2023. DOI: 10.1109/ITIKD56332.2023.10099710
Deaf and mute people rely on sign language to communicate with others and among themselves. Hearing people often overlook the value of sign language, even though it is the sole means of communication for deaf and mute communities. Because of these impairments, such individuals face considerable setbacks in life, including unemployment and serious depression. Sign language interpreters are one communication service they use; however, hiring interpreters is prohibitively expensive, so a low-cost alternative is needed. To that end, we built a system that translates a visual hand-gesture dataset of Arabic Sign Language into written text. The Arabic Alphabets Sign Language dataset used for this model consists of 32 classes with 506 images per class, for a total of 32 × 506 = 16,192 images. We evaluated a variety of pre-trained models on this dataset, and most performed reasonably well. Based on this observation, we built a convolutional neural network on EfficientnetB1, loaded with weights pre-trained on ImageNet. EfficientNet uses a simple but highly effective compound coefficient to scale network width, depth, and input resolution uniformly, as sketched below. The model achieved 99% training accuracy and 97.9% validation accuracy.
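For reference, the compound scaling rule the abstract alludes to comes from the original EfficientNet paper (Tan & Le, 2019): a single coefficient \phi scales the baseline network's depth, width, and input resolution together, with the constants \alpha, \beta, \gamma fixed by a small grid search on the baseline:

    d = \alpha^{\phi}, \quad w = \beta^{\phi}, \quad r = \gamma^{\phi},
    \quad \text{s.t. } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2,
    \quad \alpha \ge 1,\ \beta \ge 1,\ \gamma \ge 1.

EfficientnetB1 is the \phi = 1 member of the family, scaled up from the B0 baseline according to this rule.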
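The authors' code is not included here, so the following is a minimal sketch of the transfer-learning setup the abstract describes: the Keras EfficientNetB1 application loaded with ImageNet weights, frozen as a feature extractor, and topped with a new 32-class softmax head. The directory layout, train/validation split, and training hyperparameters are illustrative assumptions, not the authors' settings.

    # Minimal sketch (not the authors' code): EfficientNetB1 transfer learning
    # for the 32-class Arabic Alphabets Sign Language dataset.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import EfficientNetB1

    NUM_CLASSES = 32          # one class per Arabic sign-language alphabet
    IMG_SIZE = (240, 240)     # EfficientNetB1's native input resolution

    # Load the convolutional base with ImageNet weights; drop the classifier head.
    base = EfficientNetB1(include_top=False, weights="imagenet",
                          input_shape=IMG_SIZE + (3,), pooling="avg")
    base.trainable = False    # freeze pre-trained features for transfer learning

    model = models.Sequential([
        base,
        layers.Dropout(0.2),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Hypothetical data pipeline: assumes the 16,192 images are organised as
    # data/<class_name>/<image>.jpg and split 80/20 into train and validation.
    train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
        "data", validation_split=0.2, subset="both", seed=42,
        image_size=IMG_SIZE, batch_size=32, label_mode="categorical")

    model.fit(train_ds, validation_data=val_ds, epochs=10)

Freezing the base and training only the new head is the standard first stage of transfer learning; unfreezing the top layers of the base at a low learning rate afterwards is a common refinement, though the abstract does not say whether the authors did so.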