{"title":"A Robustness Study on Early Fire Image Recognitions","authors":"Jingwu Wang, Yifeng Tu, Yinuo Huo, Jingxia Ren","doi":"10.3390/fire7070241","DOIUrl":null,"url":null,"abstract":"With the advancement of society and the rapid urbanization process, there is an escalating need for effective fire detection systems. This study endeavors to bolster the efficacy and dependability of fire detection systems in intricate settings by refining the existing You Only Look Once version 5 (YOLOv5) algorithm and introducing algorithms grounded on fire characteristics. Primarily, the Convolutional Block Attention Module (CBAM) attention mechanism is introduced to steer the model towards substantial features, thereby amplifying detection precision. Subsequently, a multi-scale feature fusion network, employing the Adaptive Spatial Feature Fusion Module (ASFF), is embraced to proficiently amalgamate feature information from various scales, thereby enhancing the model’s comprehension of image content and subsequently fortifying detection resilience. Moreover, refining the loss function and integrating a larger detection head further fortify the model’s capability to discern diminutive targets. Experimental findings illustrate that the refined YOLOv5 algorithm attains accuracy advancements of 8% and 8.2% on standard and small target datasets, respectively. To ascertain the practical viability of the refined YOLOv5 algorithm, this study introduces a temperature-based flame detection algorithm. By amalgamating and deploying both algorithms, the ultimate experimental outcomes reveal that the integrated algorithm not only elevates accuracy but also achieves a frame rate of 57 frames, aligning with the prerequisites for practical deployment.","PeriodicalId":508952,"journal":{"name":"Fire","volume":"27 10","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fire","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/fire7070241","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the advancement of society and rapid urbanization, the need for effective fire detection systems continues to grow. This study aims to improve the accuracy and reliability of fire detection in complex environments by refining the existing You Only Look Once version 5 (YOLOv5) algorithm and introducing an algorithm based on fire characteristics. First, the Convolutional Block Attention Module (CBAM) attention mechanism is introduced to direct the model toward salient features, improving detection precision. Second, a multi-scale feature fusion network built on the Adaptive Spatial Feature Fusion Module (ASFF) is adopted to combine feature information across scales, strengthening the model's understanding of image content and improving detection robustness. In addition, refining the loss function and adding a larger detection head further improve the model's ability to detect small targets. Experimental results show that the refined YOLOv5 algorithm improves accuracy by 8% and 8.2% on the standard and small-target datasets, respectively. To verify the practical viability of the refined YOLOv5 algorithm, this study also introduces a temperature-based flame detection algorithm. When the two algorithms are combined and deployed, the final experiments show that the integrated algorithm not only improves accuracy but also reaches a frame rate of 57 frames per second, meeting the requirements for practical deployment.
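The abstract names CBAM as the attention mechanism added to YOLOv5 but does not include code. Below is a minimal PyTorch sketch of a standard CBAM block (channel attention followed by spatial attention) of the kind such a modification would insert into the backbone or neck; the class names, reduction ratio of 16, 7x7 spatial kernel, and the 128-channel example are illustrative assumptions, not the authors' implementation.

```python
# Minimal CBAM sketch (assumed, not the paper's code): channel attention
# then spatial attention, each applied as a multiplicative gate.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both the average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied as gates."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_att(x)   # reweight channels (e.g. flame colour cues)
        x = x * self.spatial_att(x)   # reweight locations (e.g. the flame region)
        return x


# Example: gate a hypothetical 128-channel YOLOv5 feature map.
feat = torch.randn(1, 128, 40, 40)
print(CBAM(128)(feat).shape)  # torch.Size([1, 128, 40, 40])
```

In a YOLOv5-style network, a block like this is typically placed after a backbone or neck stage so that subsequent layers, including the multi-scale fusion described above, receive features reweighted toward the salient (e.g. flame-like) regions.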