IPCRGC-YOLOv7: face mask detection algorithm based on improved partial convolution and recursive gated convolution

Impact Factor 2.9 · CAS Region 4 (Computer Science) · JCR Q2, Computer Science, Artificial Intelligence
Huaping Zhou, Anpei Dang, Kelei Sun
Journal of Real-Time Image Processing, published 2024-03-26. DOI: 10.1007/s11554-024-01448-2
Citations: 0

Abstract

In complex scenarios, current detection algorithms often suffer from misdetections and omissions when identifying improper mask wearing by pedestrians. This paper introduces an enhanced detection method, IPCRGC-YOLOv7 (Improved Partial Convolution Recursive Gated Convolution-YOLOv7), as a solution. First, we integrate the Partial Convolution structure into the backbone network to effectively reduce the number of model parameters, and we adopt the residual connection structure derived from the RepVGG network to address vanishing training gradients. Additionally, we introduce an efficient aggregation module, PRE-ELAN (Partially Representative Efficiency-ELAN), to replace the original Efficient Long-Range Attention Network (ELAN) structure. Next, we improve the Cross Stage Partial Network (CSPNet) module by incorporating recursive gated convolution: the resulting module, CSPNRGC (Cross Stage Partial Network Recursive Gated Convolution), replaces the ELAN structure in the neck and enables higher-order spatial interactions across different network hierarchies. Lastly, in the loss function, we replace the original cross-entropy loss with Efficient-IoU to improve the accuracy of the loss calculation, and, to balance the contributions of high-quality and low-quality samples, we propose a new loss function, Wise-EIoU (Wise-Efficient IoU). The experimental results show that, compared with the original YOLOv7, IPCRGC-YOLOv7 improves accuracy by 4.71%, recall by 5.94%, mean Average Precision (mAP@0.5) by 2.9%, and mAP@0.5:0.95 by 2.7%, meeting the accuracy requirements of mask-wearing detection in practical application scenarios.
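The abstract's parameter-reduction claim rests on Partial Convolution, in which a regular convolution is applied to only a fraction of the channels while the rest pass through untouched. The paper provides no code; the following is a minimal numpy sketch of that idea, assuming the common formulation (from FasterNet, where PConv originates) of convolving the first quarter of the channels:

```python
import numpy as np

def partial_conv(x, weight, ratio=0.25):
    """Sketch of Partial Convolution: a 3x3 conv over only the first
    `ratio` fraction of channels; the remaining channels are identity.
    x: (C, H, W); weight: (Cp, Cp, 3, 3) with Cp = int(C * ratio).
    The ratio of 0.25 is an assumption, not taken from the paper."""
    c, h, w = x.shape
    cp = int(c * ratio)
    out = x.copy()                       # untouched channels pass through
    xp = np.pad(x[:cp], ((0, 0), (1, 1), (1, 1)))  # 'same' padding
    conv = np.zeros((cp, h, w))
    for o in range(cp):                  # naive direct convolution
        for i in range(cp):
            for dy in range(3):
                for dx in range(3):
                    conv[o] += weight[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + w]
    out[:cp] = conv
    return out
```

With ratio 0.25, parameters and FLOPs of the conv drop to roughly 1/16 of a full convolution over all channels, which matches the lightweight-backbone motivation in the abstract.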

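The "higher-order spatial interactions" in the CSPNRGC module come from recursive gated convolution (gnConv, introduced in HorNet). A faithful gnConv also interleaves 1x1 projections and channel-dimension scaling; the sketch below omits those and keeps only the recursive gating over depthwise-convolved feature groups, so it illustrates the mechanism rather than the paper's exact module:

```python
import numpy as np

def dwconv3x3(x, w):
    """Depthwise 3x3 convolution, 'same' padding. x: (C,H,W), w: (C,3,3)."""
    c, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for ch in range(c):
        for dy in range(3):
            for dx in range(3):
                out[ch] += w[ch, dy, dx] * xp[ch, dy:dy + h, dx:dx + wd]
    return out

def gnconv(x, dw_weight, order=3):
    """Simplified recursive gated convolution: split depthwise-conv
    features into `order` groups, then gate recursively,
    p <- q_k * p, so the output mixes order-n spatial interactions.
    x: (C,H,W) with C divisible by `order`; output has C/order channels.
    The 1x1 projections of real gnConv are omitted for brevity."""
    qs = np.split(dwconv3x3(x, dw_weight), order, axis=0)
    p = qs[0]
    for k in range(1, order):
        p = qs[k] * p        # each step raises the interaction order by one
    return p
```

Each elementwise product multiplies in one more spatially convolved factor, which is how gnConv reaches arbitrary interaction order with only convolutions and gating, i.e. without self-attention.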

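The loss change starts from Efficient-IoU before the paper extends it to Wise-EIoU (whose weighting scheme is not specified in the abstract and is therefore not reproduced). A minimal sketch of the standard EIoU formulation: 1 − IoU plus penalties on center distance and on width/height differences, each normalized by the smallest enclosing box:

```python
def eiou_loss(box1, box2, eps=1e-9):
    """Efficient-IoU loss for two boxes in (x1, y1, x2, y2) format.
    Standard EIoU only; the paper's Wise-EIoU reweighting is omitted."""
    # intersection and IoU
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    a2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    iou = inter / (a1 + a2 - inter + eps)
    # smallest enclosing box
    cw = max(box1[2], box2[2]) - min(box1[0], box2[0])
    ch = max(box1[3], box2[3]) - min(box1[1], box2[1])
    # squared center distance over squared enclosing diagonal
    cx1, cy1 = (box1[0] + box1[2]) / 2, (box1[1] + box1[3]) / 2
    cx2, cy2 = (box2[0] + box2[2]) / 2, (box2[1] + box2[3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    diag2 = cw ** 2 + ch ** 2 + eps
    # width/height penalties, each normalized by the enclosing box
    dw2 = ((box1[2] - box1[0]) - (box2[2] - box2[0])) ** 2 / (cw ** 2 + eps)
    dh2 = ((box1[3] - box1[1]) - (box2[3] - box2[1])) ** 2 / (ch ** 2 + eps)
    return 1 - iou + rho2 / diag2 + dw2 + dh2
```

Unlike CIoU's aspect-ratio term, EIoU penalizes width and height errors directly, which is the "loss calculation accuracy" improvement the abstract refers to.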
Source journal: Journal of Real-Time Image Processing
Categories: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
CiteScore: 6.80
Self-citation rate: 6.70%
Articles per year: 68
Review time: 6 months
Journal description: Due to rapid advancements in integrated circuit technology, the rich theoretical results that have been developed by the image and video processing research community are now being increasingly applied in practical systems to solve real-world image and video processing problems. Such systems involve constraints placed not only on their size, cost, and power consumption, but also on the timeliness of the image data processed. Examples of such systems are mobile phones, digital still/video/cell-phone cameras, portable media players, personal digital assistants, high-definition television, video surveillance systems, industrial visual inspection systems, medical imaging devices, vision-guided autonomous robots, spectral imaging systems, and many other real-time embedded systems.

In these real-time systems, strict timing requirements demand that results are available within a certain interval of time as imposed by the application. It is often the case that an image processing algorithm is developed and proven theoretically sound, presumably with a specific application in mind, but its practical applications and the detailed steps, methodology, and trade-off analysis required to achieve its real-time performance are not fully explored, leaving these critical and usually non-trivial issues for those wishing to employ the algorithm in a real-time system.

The Journal of Real-Time Image Processing is intended to bridge the gap between the theory and practice of image processing, serving the greater community of researchers, practicing engineers, and industrial professionals who deal with designing, implementing or utilizing image processing systems which must satisfy real-time design constraints.