MsFireD-Net: A lightweight and efficient convolutional neural network for flame and smoke segmentation

F.M. Anim Hossain, Youmin Zhang
Journal: Journal of Automation and Intelligence, Volume 2, Issue 3, Pages 130-138
DOI: 10.1016/j.jai.2023.08.003
Published: 2023-08-01
URL: https://www.sciencedirect.com/science/article/pii/S2949855423000345
Citations: 0

Abstract

With the rising frequency and severity of wildfires across the globe, researchers have been actively searching for a reliable solution for early-stage forest fire detection. In recent years, Convolutional Neural Networks (CNNs) have demonstrated outstanding performance in computer vision-based object detection tasks, including forest fire detection. Using CNNs to detect forest fires by segmenting both flame and smoke pixels can provide not only early and accurate detection but also additional information such as the size, spread, location, and movement of the fire. However, CNN-based segmentation networks are computationally demanding and can be difficult to deploy onboard lightweight mobile platforms, such as an Uncrewed Aerial Vehicle (UAV). To address this issue, this paper proposes a new, efficient upsampling technique based on transposed convolution that makes segmentation CNNs lighter. The proposed technique, named Reversed Depthwise Separable Transposed Convolution (RDSTC), achieved F1-scores of 0.78 for smoke and 0.74 for flame, outperforming U-Net networks with bilinear upsampling, transposed convolution, and CARAFE upsampling. Additionally, a Multi-signature Fire Detection Network (MsFireD-Net) is proposed, with 93% fewer parameters and 94% fewer computations than the RDSTC U-Net. Despite being such a lightweight and efficient network, MsFireD-Net demonstrates strong results against the other U-Net-based networks.
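The efficiency gain of a depthwise separable transposed convolution over a standard one can be illustrated with a simple parameter count. The sketch below is an assumption about the general structure (a 1x1 pointwise convolution paired with a per-channel depthwise transposed convolution); the exact layer ordering and design of RDSTC are given in the paper itself, and the channel/kernel sizes here are illustrative, not taken from the network.

```python
def transposed_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of a standard k x k transposed convolution (bias omitted):
    one k x k filter per (input channel, output channel) pair."""
    return c_in * c_out * k * k

def separable_transposed_params(c_in: int, c_out: int, k: int) -> int:
    """Hypothetical depthwise separable variant (bias omitted): a 1x1
    pointwise convolution to mix channels, plus a depthwise transposed
    convolution with one k x k filter per channel for spatial upsampling.
    The factorization order is an assumption for illustration only."""
    pointwise = c_in * c_out   # 1x1 channel-mixing convolution
    depthwise = c_out * k * k  # one k x k filter per output channel
    return pointwise + depthwise

# Illustrative layer: 64 -> 64 channels, 2x2 upsampling kernel.
standard = transposed_conv_params(64, 64, 2)        # 16384 parameters
separable = separable_transposed_params(64, 64, 2)  # 4352 parameters
print(standard, separable, f"{1 - separable / standard:.0%} fewer")
```

For this illustrative layer the separable factorization needs roughly 73% fewer parameters, which is the kind of per-layer saving that lets a segmentation decoder shrink without changing its output resolution.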
