Enhanced object detection in low-visibility haze conditions with YOLOv9s.
Yang Zhang, Bin Zhou, Xue Zhao, Xiaomeng Song
PLoS ONE 20(2): e0317852. Published 2025-02-24 (eCollection 2025). DOI: 10.1371/journal.pone.0317852
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11849987/pdf/
Low-visibility haze environments, marked by their inherent low contrast and high brightness, present a formidable challenge to the precision and robustness of conventional object detection algorithms. This paper introduces an enhanced YOLOv9s-based object detection framework tailored to low-visibility haze conditions, capitalizing on the merits of contrastive learning for optimizing local feature details, as well as the benefits of multiscale attention mechanisms and dynamic focusing mechanisms for achieving real-time global quality optimization. Specifically, the framework incorporates Patchwise Contrastive Learning to fortify the correlation among positive samples within image patches, effectively reducing negative-sample interference and enhancing the model's capability to discern subtle local features of haze-impacted images. Additionally, the integration of Efficient Multi-Scale Attention and the Wise-IoU Dynamic Focusing Mechanism enhances the algorithm's sensitivity to channel, spatial-orientation, and locational information. Furthermore, a nonmonotonic strategy for dynamically adjusting the loss-function weights significantly boosts the model's detection precision and training efficiency. Comprehensive experimental evaluations on the COCO2017 fog-augmented dataset indicate that the proposed algorithm surpasses current state-of-the-art techniques on various assessment metrics, including precision, recall, and mean average precision (mAP). Our source code is available at https://github.com/PaTinLei/EOD.
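The Patchwise Contrastive Learning component described above can be illustrated with a minimal InfoNCE-style loss over patch embeddings. This is a generic sketch of the technique, not the paper's exact formulation: the function name `patch_nce_loss`, the temperature default `tau=0.07`, and the toy embeddings are all illustrative assumptions.

```python
import math

def _cosine(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    na = math.sqrt(sum(x * x for x in a)) + 1e-9
    nb = math.sqrt(sum(x * x for x in b)) + 1e-9
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE over patch embeddings: the query patch should align with its
    positive (the patch at the same spatial location in the paired view)
    and repel the negative patches. Lower loss = stronger alignment."""
    logits = [_cosine(query, positive) / tau]
    logits += [_cosine(query, n) / tau for n in negatives]
    m = max(logits)  # log-sum-exp stabilization
    lse = m + math.log(sum(math.exp(l - m) for l in logits))
    return lse - logits[0]  # cross-entropy with the positive at index 0
```

In a real pipeline the query/positive pairs would come from matched patch locations in a hazy image and its feature-space counterpart, with all other patches in the batch serving as negatives.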
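The Wise-IoU dynamic focusing mechanism and the nonmonotonic weight-adjustment strategy can be sketched as follows. This is a minimal illustration assuming corner-format boxes (x1, y1, x2, y2); the function names and the `alpha`/`delta` defaults follow the commonly published Wise-IoU v3 description and are assumptions, not the paper's exact implementation.

```python
import math

def _iou(a, b):
    """Plain IoU for two boxes in (x1, y1, x2, y2) corner format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def wise_iou(pred, target):
    """Wise-IoU v1: IoU loss scaled by a distance-based attention term
    built from the smallest enclosing box (treated as a constant,
    i.e. detached from the gradient in a real training loop)."""
    l_iou = 1.0 - _iou(pred, target)
    cxp, cyp = (pred[0] + pred[2]) / 2.0, (pred[1] + pred[3]) / 2.0
    cxt, cyt = (target[0] + target[2]) / 2.0, (target[1] + target[3]) / 2.0
    wg = max(pred[2], target[2]) - min(pred[0], target[0])  # enclosing width
    hg = max(pred[3], target[3]) - min(pred[1], target[1])  # enclosing height
    dist_sq = (cxp - cxt) ** 2 + (cyp - cyt) ** 2
    r_wiou = math.exp(dist_sq / (wg ** 2 + hg ** 2 + 1e-9))
    return r_wiou * l_iou

def wiou_gradient_gain(l_iou, running_mean_l_iou, alpha=1.9, delta=3.0):
    """Nonmonotonic focusing factor (Wise-IoU v3 style): the outlier degree
    beta is the sample's IoU loss relative to a running mean; the gain first
    rises then falls with beta, down-weighting both easy samples and
    extreme outliers rather than weighting loss monotonically."""
    beta = l_iou / (running_mean_l_iou + 1e-9)
    return beta / (delta * alpha ** (beta - delta))
```

The nonmonotonic gain is what distinguishes this from focal-style weighting: a sample whose loss is far above the running mean is treated as a likely low-quality label and receives a smaller gradient gain, not a larger one.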
About the journal:
PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open access: freely accessible online; authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage