Automated Gesture Recognition Using African Vulture Optimization with Deep Learning for Visually Impaired People on Sensory Modality Data

Impact Factor: 1.7 · JCR Quartile: Q2 (Rehabilitation)
M. Maashi, M. Al-Hagery, Mohammed Rizwanullah, A. Osman
Journal: Scandinavian Journal of Disability Research · DOI: 10.57197/jdr-2023-0019 · Published: 2023-01-01 · Citations: 0

Abstract

Gesture recognition for visually impaired persons (VIPs) is a useful technology for enhancing their communication and increasing accessibility. It is vital to understand the specific needs and challenges faced by VIPs when designing a gesture recognition model. Because typical gesture recognition methods frequently depend on visual input (for instance, cameras), it is important to explore other sensory modalities for input. Deep learning (DL)-based gesture recognition is effective for the interaction of VIPs with their devices, offering a more intuitive and natural way of engaging with technology and making it more accessible to everyone. Therefore, this study presents an African Vulture Optimization with Deep Learning-based Gesture Recognition for Visually Impaired People on Sensory Modality Data (AVODL-GRSMD) technique. The AVODL-GRSMD technique focuses on applying a DL model with a hyperparameter tuning strategy for an efficient and accurate gesture detection and classification process. First, a data preprocessing stage normalizes the input sensor data. The technique then uses a multi-head attention-based bidirectional gated recurrent unit (MHA-BGRU) model for accurate gesture recognition. Finally, the hyperparameters of the MHA-BGRU model are optimized using the African Vulture Optimization (AVO) algorithm. A series of simulation analyses demonstrated the superior performance of the AVODL-GRSMD technique, with experimental results showing a better recognition rate than state-of-the-art models.
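The preprocessing stage described in the abstract normalizes the raw sensor input before it reaches the recognition model. As an illustration only (the abstract does not specify the normalization scheme; per-channel z-scoring over windowed sensor data is assumed here), the step might look like:

```python
import numpy as np

def normalize_windows(x):
    """Per-channel z-score normalization of windowed sensor data.

    x: array of shape (windows, timesteps, channels).
    Returns an array of the same shape with approximately zero mean
    and unit variance per channel, computed across all windows.
    """
    mean = x.mean(axis=(0, 1), keepdims=True)
    std = x.std(axis=(0, 1), keepdims=True)
    return (x - mean) / (std + 1e-8)  # epsilon guards against flat channels
```

In practice the same mean and standard deviation estimated on the training split would be reused for validation and test data to avoid leakage.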
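The MHA-BGRU model combines a bidirectional GRU encoder with multi-head self-attention over the hidden states. The following minimal NumPy sketch shows a forward pass of that architecture; all dimensions, the initialization scale, and the mean-pooling readout are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MHABGRU:
    """Minimal multi-head-attention bidirectional GRU classifier (sketch)."""

    def __init__(self, d_in, d_hid, n_heads, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1  # small-scale random initialization (illustrative)
        d_cat = 2 * d_hid  # forward + backward hidden states concatenated
        assert d_cat % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_cat // n_heads
        # One weight set per direction: update gate, reset gate, candidate.
        self.W = {d: s * rng.standard_normal((3, d_in, d_hid)) for d in "fb"}
        self.U = {d: s * rng.standard_normal((3, d_hid, d_hid)) for d in "fb"}
        self.Wq = s * rng.standard_normal((d_cat, d_cat))
        self.Wk = s * rng.standard_normal((d_cat, d_cat))
        self.Wv = s * rng.standard_normal((d_cat, d_cat))
        self.Wo = s * rng.standard_normal((d_cat, n_classes))

    def _gru(self, x, direction):
        W, U = self.W[direction], self.U[direction]
        h, out = np.zeros(W.shape[2]), []
        steps = x if direction == "f" else x[::-1]
        for xt in steps:
            z = sigmoid(xt @ W[0] + h @ U[0])          # update gate
            r = sigmoid(xt @ W[1] + h @ U[1])          # reset gate
            hc = np.tanh(xt @ W[2] + (r * h) @ U[2])   # candidate state
            h = (1 - z) * h + z * hc
            out.append(h)
        return np.array(out if direction == "f" else out[::-1])

    def forward(self, x):
        """x: (timesteps, d_in) -> class probabilities (n_classes,)."""
        h = np.concatenate([self._gru(x, "f"), self._gru(x, "b")], axis=1)
        T, d = h.shape

        def split(m):  # (T, d) -> (heads, T, d_head)
            return m.reshape(T, self.n_heads, self.d_head).transpose(1, 0, 2)

        q, k, v = split(h @ self.Wq), split(h @ self.Wk), split(h @ self.Wv)
        att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(self.d_head), axis=-1)
        ctx = (att @ v).transpose(1, 0, 2).reshape(T, d)
        pooled = ctx.mean(axis=0)  # temporal average pooling (assumed readout)
        return softmax(pooled @ self.Wo)
```

A production implementation would use a DL framework with trained weights; this sketch only makes the data flow of the abstract's architecture concrete.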
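Finally, AVO is a population-based metaheuristic, and in this pipeline it tunes the MHA-BGRU hyperparameters by minimizing an objective such as validation error. The sketch below is a simplified vulture-style search (leader-guided exploitation plus decaying random exploration) written to convey the structure of such an optimizer; it is not the published AVO update equations:

```python
import numpy as np

def avo_search(objective, bounds, n_vultures=20, n_iters=60, seed=0):
    """Simplified vulture-style metaheuristic (sketch, not exact AVO).

    Each candidate either moves toward one of the two best solutions
    (exploitation) or takes a random bounded step (exploration), with
    the exploration probability decaying over iterations.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = rng.uniform(lo, hi, size=(n_vultures, len(bounds)))
    fit = np.array([objective(p) for p in pop])
    for t in range(n_iters):
        order = np.argsort(fit)
        best1, best2 = pop[order[0]], pop[order[1]]
        explore = 1.0 - t / n_iters  # linearly decaying exploration rate
        for i in range(n_vultures):
            leader = best1 if rng.random() < 0.5 else best2
            if rng.random() < explore:
                # exploration: random step scaled by the search range
                cand = pop[i] + rng.normal(0, explore, len(bounds)) * (hi - lo)
            else:
                # exploitation: move around the chosen leader
                cand = leader + rng.uniform(-1, 1, len(bounds)) * (leader - pop[i])
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            if f < fit[i]:  # greedy replacement
                pop[i], fit[i] = cand, f
    j = np.argmin(fit)
    return pop[j], fit[j]
```

In the paper's setting, `objective(p)` would train the MHA-BGRU with candidate hyperparameters `p` (e.g. learning rate, hidden size) and return the validation error; the toy quadratic used for testing here is only a stand-in.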
Source journal metrics: CiteScore 3.20 · Self-citation rate 0.00% · Articles per year: 13 · Review time: 16 weeks