Title: A Novel Image-Based Arabic Hand Gestures Recognition Approach Using YOLOv7 and ArSL21L
Authors: Fatma Mazen, Mai Ezz-Eldin
Journal: Fayoum University Journal of Engineering, vol. 56, no. 5
DOI: 10.21608/fuje.2023.216182.1050
Published: 2024-01-01 (Journal Article)
Citations: 0
Abstract
Recognizing and documenting Arabic sign language has recently received considerable attention because of its potential to improve communication between deaf and hearing people. The primary goal of sign language recognition (SLR) research is to build automatic systems that enable such communication, yet until recently Arabic SLR (ArSLR) received little attention, and building an automatic Arabic hand gesture recognition system remains a challenging task. This work presents a novel image-based ArSL recognition approach in which You Only Look Once v7 (YOLOv7) is used to build an accurate ArSL alphabet detector and classifier on ArSL21L, the Arabic Sign Language Letter dataset. The proposed YOLOv7 medium model achieved the highest mAP@0.5 and mAP@0.5:0.95 scores of 0.9909 and 0.8306, respectively, outperforming both YOLOv5m and YOLOv5l on both metrics. Furthermore, the YOLOv7-tiny model surpassed not only YOLOv5s but also YOLOv5m on both metrics, while YOLOv5s recorded the lowest mAP@0.5 and mAP@0.5:0.95 scores of 0.9408 and 0.7661, respectively.
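The two metrics compared above both rest on intersection-over-union (IoU): a predicted box counts as a true positive only when its IoU with a ground-truth box exceeds a threshold — a single 0.5 threshold for mAP@0.5, and the average of APs over thresholds 0.50 to 0.95 in steps of 0.05 for mAP@0.5:0.95. A minimal sketch of the IoU computation is shown below; the `(x1, y1, x2, y2)` corner format and the helper name `iou` are illustrative assumptions, not code from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) corner format."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# The IoU thresholds averaged over for the mAP@0.5:0.95 metric.
MAP_THRESHOLDS = [0.50 + 0.05 * i for i in range(10)]  # 0.50, 0.55, ..., 0.95
```

Under this convention, mAP@0.5 is the more forgiving metric (a 50% overlap suffices), which is why the reported mAP@0.5 scores (0.94–0.99) sit well above the corresponding mAP@0.5:0.95 scores (0.77–0.83).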