YOLO-LHD: an enhanced lightweight approach for helmet wearing detection in industrial environments

Lianhua Hu, Jiaqi Ren
Frontiers in Built Environment (IF 2.2, Q2: Construction & Building Technology)
DOI: 10.3389/fbuil.2023.1288445
Published: 2023-11-10
Citations: 0

Abstract

Establishing a lightweight yet high-precision object detection algorithm is paramount for accurately assessing workers’ helmet-wearing status in intricate industrial settings. Helmet detection is inherently challenging due to factors such as the diminutive target size, intricate backgrounds, and the need to strike a balance between model compactness and detection accuracy. In this paper, we propose YOLO-LHD (You Only Look Once-Lightweight Helmet Detection), an efficient framework built upon the YOLOv8 object detection model. The proposed approach enhances the model’s ability to detect small targets in complex scenes by incorporating the Coordinate attention mechanism and the Focal loss function and by introducing high-resolution features and large-scale detection heads. Additionally, we integrate the improved GhostV2 module into the backbone feature extraction network to further improve the balance between model accuracy and size. We evaluated our method on the MHWD dataset established in this study and compared it with the baseline model YOLOv8n. The proposed YOLO-LHD model achieved a 66.1% reduction in model size while attaining the best mAP50 of 94.3% with only 0.86M parameters. This demonstrates the effectiveness of the proposed approach in achieving lightweight deployment and high-precision helmet detection.
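The abstract names two well-known building blocks, coordinate attention and the focal loss. The snippet below is a minimal PyTorch sketch of how these two pieces are commonly implemented; it illustrates the general techniques rather than the authors' exact YOLO-LHD modules, and the class/function names and default hyperparameters (reduction=32, gamma=2.0, alpha=0.25) are assumptions made for illustration.

```python
# Minimal sketches of coordinate attention (Hou et al., 2021) and focal loss
# (Lin et al., 2017). Names and hyperparameters here are illustrative
# assumptions, not the authors' exact YOLO-LHD settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoordinateAttention(nn.Module):
    """Factorizes global pooling into two 1-D poolings (along H and W),
    so the attention weights retain positional cues for small targets."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_h = F.adaptive_avg_pool2d(x, (h, 1))                      # (n, c, h, 1)
        x_w = F.adaptive_avg_pool2d(x, (1, w)).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # gate over rows
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # gate over columns
        return x * a_h * a_w


def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """Binary focal loss: down-weights easy examples so hard, small
    helmet instances contribute more to the classification loss."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)              # hypothetical backbone feature map
    refined = CoordinateAttention(64)(feat)        # same shape, position-aware re-weighting
    cls_logits = torch.randn(8, 2)                 # hypothetical helmet / no-helmet logits
    labels = torch.randint(0, 2, (8, 2)).float()
    print(refined.shape, focal_loss(cls_logits, labels).item())
```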
Source journal
Frontiers in Built Environment (Social Sciences: Urban Studies)
CiteScore: 4.80
Self-citation rate: 6.70%
Articles published: 266