Enhanced object detection in low-visibility haze conditions with YOLOv9s.

IF 2.6 · JCR Q1, MULTIDISCIPLINARY SCIENCES (Zone 3, multidisciplinary journals)
PLoS ONE · Pub Date: 2025-02-24 · eCollection Date: 2025-01-01 · DOI: 10.1371/journal.pone.0317852
Yang Zhang, Bin Zhou, Xue Zhao, Xiaomeng Song
PLoS ONE 20(2): e0317852 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11849987/pdf/ · Citations: 0

Abstract

Low-visibility haze environments, marked by their inherent low contrast and high brightness, pose a formidable challenge to the precision and robustness of conventional object detection algorithms. This paper introduces an enhanced object detection framework based on YOLOv9s, tailored to low-visibility haze conditions, which combines contrastive learning for refining local feature details with multiscale attention and dynamic focusing mechanisms for real-time global quality optimization. Specifically, the framework incorporates Patchwise Contrastive Learning to strengthen the correlation among positive samples within image patches, effectively reducing negative-sample interference and enhancing the model's capability to discern subtle local features in haze-degraded images. Additionally, the integration of Efficient Multi-Scale Attention and the Wise-IoU dynamic focusing mechanism increases the algorithm's sensitivity to channel, spatial-orientation, and location information. Furthermore, a nonmonotonic strategy for dynamically adjusting the loss-function weights significantly boosts the model's detection precision and training efficiency. Comprehensive experimental evaluations on the fog-augmented COCO2017 dataset indicate that the proposed algorithm surpasses current state-of-the-art techniques on a range of metrics, including precision, recall, and mean average precision (mAP). Our source code is available at: https://github.com/PaTinLei/EOD.
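The Wise-IoU dynamic focusing mechanism mentioned in the abstract weights each box's IoU loss by a distance-based attention term and a nonmonotonic focusing coefficient computed from the sample's "outlier degree" relative to a running mean of the IoU loss. As a rough illustration (this is not the authors' code; the function name is ours, and the hyperparameters `alpha`, `delta`, and the moving average `mean_iou_loss` follow the published Wise-IoU v3 formulation), the computation for a single box pair might look like:

```python
import math

def wiou_v3_loss(pred, target, mean_iou_loss, alpha=1.9, delta=3.0):
    """Sketch of a Wise-IoU v3 loss for one pair of (x1, y1, x2, y2) boxes.

    mean_iou_loss is a running average of the IoU loss over training,
    used to normalise the per-sample "outlier degree" beta.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Plain IoU loss L_IoU = 1 - IoU
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    l_iou = 1.0 - inter / max(union, 1e-7)

    # Distance-based attention R_WIoU: centre offset normalised by the
    # smallest enclosing box's diagonal
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)
    dx = (px1 + px2 - tx1 - tx2) / 2.0
    dy = (py1 + py2 - ty1 - ty2) / 2.0
    r_wiou = math.exp((dx * dx + dy * dy) / max(cw * cw + ch * ch, 1e-7))

    # Nonmonotonic focusing coefficient: beta is the outlier degree of this
    # sample; very easy and very hard samples both get reduced weight
    beta = l_iou / max(mean_iou_loss, 1e-7)
    r = beta / (delta * alpha ** (beta - delta))

    return r * r_wiou * l_iou
```

For a perfectly matched pair the IoU loss is zero, so the whole product vanishes; as a sample's `l_iou` grows far beyond the running mean, the coefficient `r` shrinks again, which is the nonmonotonic weighting behaviour the abstract refers to.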

Source journal: PLoS ONE (Biology)
CiteScore: 6.20 · Self-citation rate: 5.40% · Articles per year: 14,242 · Review time: 3.7 months
Journal description: PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online, authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage