Hardhat-wearing detection based on YOLOv5 in Internet-of-Things

Wanbo Luo, Rajeswari Raju, K. K. Mohd Shariff, Ahmad Ihsan Yassin
Journal of Autonomous Intelligence · DOI: 10.32629/jai.v7i2.1255 · Published 2023-12-22 · Citations: 0

Abstract

Worker safety is paramount in many industries, and an essential component of industrial safety protocols is the proper use of hardhats. However, due to lax safety awareness, many workers neglect to wear hardhats correctly, leading to frequent on-site accidents in China. Traditional detection methods, such as manual inspection and video surveillance, are inefficient and costly. Real-time monitoring of hardhat use is therefore vital to boost compliance and decrease accident rates. Recently, the advancement of the Internet of Things (IoT) and edge computing has provided an opportunity to improve these methods. In this study, two detection models based on You Only Look Once (YOLO) v5, hardhat-YOLOv5s and hardhat-YOLOv5n, were designed, validated, and implemented, tailored for hardhat detection. First, a public hardhat dataset was enriched to bolster the detection models' robustness. Then, hardhat detection models were trained using YOLOv5s and YOLOv5n, each catering to edge computing terminals with different performance capacities. Finally, the models were validated using image and video data. The experimental results indicated that both models provided high detection precision and satisfied practical application needs. On the augmented public dataset, the hardhat-YOLOv5s and hardhat-YOLOv5n models achieve a Mean Average Precision (mAP) of 87.9% and 85.5%, respectively, across all six classes. Compared with the hardhat-YOLOv5s model, the parameter count and Giga Floating-point Operations (GFLOPs) of the hardhat-YOLOv5n model are 74.8% and 73.4% lower, respectively, and its Frames per Second (FPS) on the validation dataset is 30.5% higher, making it more suitable for low-cost edge computing terminals with less computational power.
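The size comparison in the abstract is a simple relative-change calculation. The sketch below is illustrative only: the absolute figures are the nominal values published for the stock YOLOv5s/YOLOv5n models (roughly 7.2 M vs. 1.9 M parameters and 16.5 vs. 4.5 GFLOPs at 640×640 input), not the paper's modified six-class hardhat variants, so the computed reductions land near but not exactly on the reported 74.8% and 73.4%.

```python
def relative_change(base: float, other: float) -> float:
    """Percentage change of `other` relative to `base` (negative means a reduction)."""
    return (other - base) / base * 100.0

# Nominal stock YOLOv5 figures (Ultralytics release table, 640 px input);
# the paper's hardhat variants will differ slightly after head modification.
yolov5s = {"params_m": 7.2, "gflops": 16.5}
yolov5n = {"params_m": 1.9, "gflops": 4.5}

for key in ("params_m", "gflops"):
    change = relative_change(yolov5s[key], yolov5n[key])
    print(f"{key}: {change:+.1f}%")  # roughly -73%, close to the paper's figures
```

The same arithmetic applied to the trained hardhat models would yield the paper's exact percentages.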
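The real-time monitoring described in the abstract ultimately reduces to classifying each detection on a frame as compliant or non-compliant and flagging confident violations. The paper does not publish its post-processing code; the function below is a hypothetical edge-side filter, with the class names and the 0.5 confidence threshold chosen purely for illustration (the abstract names six classes but not their labels).

```python
from typing import Iterable

# Hypothetical labels for non-compliant detections; the paper's actual
# six class names are not given in the abstract.
VIOLATION_CLASSES = {"head", "no_hardhat"}
CONF_THRESHOLD = 0.5  # illustrative cut-off, not taken from the paper


def hardhat_violations(
    detections: Iterable[tuple[str, float]],
) -> list[tuple[str, float]]:
    """Return detections indicating a worker without a hardhat.

    `detections` is an iterable of (class_name, confidence) pairs,
    e.g. parsed from a YOLOv5 model's output on one video frame.
    """
    return [
        (cls, conf)
        for cls, conf in detections
        if cls in VIOLATION_CLASSES and conf >= CONF_THRESHOLD
    ]


# One simulated frame: only the confident bare-head detection is flagged.
frame = [("hardhat", 0.91), ("head", 0.62), ("person", 0.88), ("head", 0.31)]
print(hardhat_violations(frame))  # [('head', 0.62)]
```

On an edge terminal, a non-empty result for several consecutive frames would typically trigger an alert, trading a little latency for robustness against single-frame false positives.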