YOLOv5s-ACE: Forest Fire Object Detection Algorithm Based on Improved YOLOv5s

Jianan Wang, Changzhong Wang, Weiping Ding, Cheng Li

Fire Technology, Volume 60, Issue 6, pp. 4023-4043 (published 2024-07-22)
DOI: 10.1007/s10694-024-01619-4
Citations: 0
Abstract
To address the challenges of low detection accuracy, slow detection speed, coarse feature extraction, and the difficulty of deploying detection in complex forest fire backgrounds, this paper presents a forest fire object detection algorithm based on an improved YOLOv5s (YOLOv5s-ACE). The algorithm not only achieves accurate identification of small objects but also maintains detection accuracy and speed. Firstly, YOLOv5s-ACE uses Copy-Paste data augmentation to expand the small-object sample set, reducing the risk of overfitting during model training. Secondly, it replaces the Spatial Pyramid Pooling (SPP) module in the backbone of the YOLOv5 network with Atrous Spatial Pyramid Pooling (ASPP), which enlarges the receptive field while preserving resolution and thus aids the accurate localization of small forest flame objects. Thirdly, adding the Convolutional Block Attention Module (CBAM) to the C3 module of the Neck layer further screens the key features of the forest flame object and suppresses irrelevant information, such as background clutter, that interferes with flame detection; detection performance improves without increasing the network depth or width or the resolution of the input image. Finally, we replace the CIoU (Complete-IoU) loss with the EIoU (Efficient-IoU) loss to optimize the model and improve accuracy. Experimental results show that, compared with the original algorithm, the proposed detector improves mean Average Precision (mAP) by 5.6%, Precision by 2.7%, Recall by 6.5%, and GFLOPs by 6.7%. Even compared with the YOLOv7 algorithm, YOLOv5s-ACE increases mAP by 0.9%, Precision by 2.2%, and Recall by 0.3%.
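Two of the changes described in the abstract can be illustrated with short sketches. The first is the ASPP module that replaces SPP in the YOLOv5 backbone. The block below is a minimal, generic PyTorch ASPP in the spirit of DeepLab, not the paper's exact module: the dilation rates, channel widths, and SiLU activation are assumptions chosen to resemble common YOLOv5-style conventions.

```python
# Illustrative sketch only: a generic Atrous Spatial Pyramid Pooling (ASPP) block.
# The abstract says ASPP replaces SPP in the YOLOv5 backbone; the dilation rates
# and channel widths here are assumptions, not values taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 6, 12, 18)):
        super().__init__()
        # Parallel atrous convolutions with increasing dilation rates enlarge the
        # receptive field without reducing the spatial resolution of the feature map.
        self.branches = nn.ModuleList()
        for d in dilations:
            k = 1 if d == 1 else 3
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k,
                          padding=0 if d == 1 else d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.SiLU(inplace=True),
            ))
        # Image-level pooling branch captures global context.
        self.global_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )
        # 1x1 projection fuses all branches back to out_ch channels.
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * (len(dilations) + 1), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [branch(x) for branch in self.branches]
        g = F.interpolate(self.global_pool(x), size=(h, w), mode="nearest")
        return self.project(torch.cat(feats + [g], dim=1))
```

The second change is the loss swap from CIoU to EIoU. The sketch below implements the standard Efficient-IoU formulation (an IoU term, a center-distance term, and separate width and height penalties measured against the smallest enclosing box). How the paper weights this term inside YOLOv5's overall objective is not stated in the abstract, so the `eiou_loss` function is illustrative only.

```python
# Illustrative sketch only: a standard EIoU loss for axis-aligned boxes given in
# (x1, y1, x2, y2) format. This follows the usual Efficient-IoU definition; the
# exact integration into YOLOv5s-ACE is an assumption.
import torch

def eiou_loss(pred, target, eps=1e-7):
    """pred, target: tensors of shape (N, 4) holding (x1, y1, x2, y2)."""
    # Intersection area
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = inter_w * inter_h

    # Union area and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box and its diagonal
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Squared distance between box centers
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2
            + (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4

    # Separate width and height penalties (the extra terms EIoU adds over CIoU)
    pw, ph = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    tw, th = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]

    loss = 1 - iou + rho2 / c2 \
        + (pw - tw) ** 2 / (cw ** 2 + eps) \
        + (ph - th) ** 2 / (ch ** 2 + eps)
    return loss.mean()
```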
Journal Description
Fire Technology publishes original contributions, both theoretical and empirical, that contribute to the solution of problems in fire safety science and engineering. It is the leading journal in the field, publishing applied research dealing with the full range of actual and potential fire hazards facing humans and the environment. It covers the entire domain of fire safety science and engineering problems relevant in industrial, operational, cultural, and environmental applications, including modeling, testing, detection, suppression, human behavior, wildfires, structures, and risk analysis.
The aim of Fire Technology is to push forward the frontiers of knowledge and technology by encouraging interdisciplinary communication of significant technical developments in fire protection and subjects of scientific interest to the fire protection community at large.
It is published in conjunction with the National Fire Protection Association (NFPA) and the Society of Fire Protection Engineers (SFPE). The mission of NFPA is to help save lives and reduce loss with information, knowledge, and passion. The mission of SFPE is advancing the science and practice of fire protection engineering internationally.