Enhancing unmanned ground vehicle performance in SAR operations: integrated gesture-control and deep learning framework for optimised victim detection.

Impact Factor: 2.9 · JCR Quartile: Q2 (Robotics)
Frontiers in Robotics and AI · Published: 2024-06-18 · eCollection: 2024-01-01 · DOI: 10.3389/frobt.2024.1356345
Muhammad Hamza Zafar, Syed Kumayl Raza Moosavi, Filippo Sanfilippo
{"title":"Enhancing unmanned ground vehicle performance in SAR operations: integrated gesture-control and deep learning framework for optimised victim detection.","authors":"Muhammad Hamza Zafar, Syed Kumayl Raza Moosavi, Filippo Sanfilippo","doi":"10.3389/frobt.2024.1356345","DOIUrl":null,"url":null,"abstract":"<p><p>In this study, we address the critical need for enhanced situational awareness and victim detection capabilities in Search and Rescue (SAR) operations amidst disasters. Traditional unmanned ground vehicles (UGVs) often struggle in such chaotic environments due to their limited manoeuvrability and the challenge of distinguishing victims from debris. Recognising these gaps, our research introduces a novel technological framework that integrates advanced gesture-recognition with cutting-edge deep learning for camera-based victim identification, specifically designed to empower UGVs in disaster scenarios. At the core of our methodology is the development and implementation of the Meerkat Optimization Algorithm-Stacked Convolutional Neural Network-Bi-Long Short Term Memory-Gated Recurrent Unit (MOA-SConv-Bi-LSTM-GRU) model, which sets a new benchmark for hand gesture detection with its remarkable performance metrics: accuracy, precision, recall, and F1-score all approximately 0.9866. This model enables intuitive, real-time control of UGVs through hand gestures, allowing for precise navigation in confined and obstacle-ridden spaces, which is vital for effective SAR operations. Furthermore, we leverage the capabilities of the latest YOLOv8 deep learning model, trained on specialised datasets to accurately detect human victims under a wide range of challenging conditions, such as varying occlusions, lighting, and perspectives. Our comprehensive testing in simulated emergency scenarios validates the effectiveness of our integrated approach. The system demonstrated exceptional proficiency in navigating through obstructions and rapidly locating victims, even in environments with visual impairments like smoke, clutter, and poor lighting. Our study not only highlights the critical gaps in current SAR response capabilities but also offers a pioneering solution through a synergistic blend of gesture-based control, deep learning, and purpose-built robotics. The key findings underscore the potential of our integrated technological framework to significantly enhance UGV performance in disaster scenarios, thereby optimising life-saving outcomes when time is of the essence. This research paves the way for future advancements in SAR technology, with the promise of more efficient and reliable rescue operations in the face of disaster.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11217714/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Robotics and AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frobt.2024.1356345","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract

In this study, we address the critical need for enhanced situational awareness and victim detection capabilities in Search and Rescue (SAR) operations amidst disasters. Traditional unmanned ground vehicles (UGVs) often struggle in such chaotic environments due to their limited manoeuvrability and the challenge of distinguishing victims from debris. Recognising these gaps, our research introduces a novel technological framework that integrates advanced gesture-recognition with cutting-edge deep learning for camera-based victim identification, specifically designed to empower UGVs in disaster scenarios. At the core of our methodology is the development and implementation of the Meerkat Optimization Algorithm-Stacked Convolutional Neural Network-Bi-Long Short Term Memory-Gated Recurrent Unit (MOA-SConv-Bi-LSTM-GRU) model, which sets a new benchmark for hand gesture detection with its remarkable performance metrics: accuracy, precision, recall, and F1-score all approximately 0.9866. This model enables intuitive, real-time control of UGVs through hand gestures, allowing for precise navigation in confined and obstacle-ridden spaces, which is vital for effective SAR operations. Furthermore, we leverage the capabilities of the latest YOLOv8 deep learning model, trained on specialised datasets to accurately detect human victims under a wide range of challenging conditions, such as varying occlusions, lighting, and perspectives. Our comprehensive testing in simulated emergency scenarios validates the effectiveness of our integrated approach. The system demonstrated exceptional proficiency in navigating through obstructions and rapidly locating victims, even in environments with impaired visibility, such as smoke, clutter, and poor lighting. Our study not only highlights the critical gaps in current SAR response capabilities but also offers a pioneering solution through a synergistic blend of gesture-based control, deep learning, and purpose-built robotics. The key findings underscore the potential of our integrated technological framework to significantly enhance UGV performance in disaster scenarios, thereby optimising life-saving outcomes when time is of the essence. This research paves the way for future advancements in SAR technology, with the promise of more efficient and reliable rescue operations in the face of disaster.
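The abstract reports performance figures for the two models but not their exact configurations, so the two sketches below are illustrative rather than reproductions of the paper's method.

First, a minimal Keras sketch of a stacked convolutional / Bi-LSTM / GRU classifier of the kind the MOA-SConv-Bi-LSTM-GRU name describes. The input window (30 frames of 63 hand-landmark features), layer widths, and five gesture classes are assumptions, and the Meerkat Optimization Algorithm's hyperparameter search is not reproduced here.

```python
# Illustrative stacked Conv1D -> Bi-LSTM -> GRU gesture classifier (Keras).
# Shapes, layer widths, and class count are assumptions; the paper's
# MOA-tuned hyperparameters are not given in the abstract.
from tensorflow.keras import layers, models

def build_gesture_model(timesteps=30, features=63, n_classes=5):
    """Stacked 1-D convolutions feeding a Bi-LSTM and a GRU, then softmax."""
    inputs = layers.Input(shape=(timesteps, features))
    x = layers.Conv1D(64, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)   # stacked conv block
    x = layers.MaxPooling1D(2)(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)  # Bi-LSTM stage
    x = layers.GRU(32)(x)                                                # GRU stage
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_gesture_model()
model.summary()
```

Second, a minimal sketch of camera-based victim detection with the ultralytics YOLOv8 API. The weights file `victim_detector.pt` is a hypothetical name for a model fine-tuned on a victim dataset of the kind the abstract mentions; the stock `yolov8n.pt` COCO weights would load the same way.

```python
# Illustrative YOLOv8 inference with the ultralytics package.
# "victim_detector.pt" and "frame.jpg" are placeholder names.
from ultralytics import YOLO

model = YOLO("victim_detector.pt")               # assumed fine-tuned weights
results = model.predict("frame.jpg", conf=0.25)  # single frame; a video path or stream also works

for r in results:
    for box in r.boxes:
        label = model.names[int(box.cls)]        # class name for this detection
        print(label, float(box.conf), box.xyxy.tolist())
```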

Source journal: Frontiers in Robotics and AI
CiteScore: 6.50
Self-citation rate: 5.90%
Articles published: 355
Review time: 14 weeks
Journal description: Frontiers in Robotics and AI publishes rigorously peer-reviewed research covering all theory and applications of robotics, technology, and artificial intelligence, from biomedical to space robotics.