Using computer vision to classify, locate and segment fire behavior in UAS-captured images

Brett L. Lawrence, Emerson de Lemmus

Science of Remote Sensing, Vol. 10, Article 100167. Published 2024-09-28. DOI: 10.1016/j.srs.2024.100167. IF 5.7, Q1 (Environmental Sciences). CiteScore 12.20. Citations: 0.

Abstract

The widely adaptable capabilities of artificial intelligence, in particular deep learning and computer vision, have led to significant research output on flame and smoke detection. The composition of flame and smoke, also described as fire behavior, can differ considerably depending on factors like weather, fuels, and the specific landscape on which a fire is observed. The ability to detect definable classes of fire behavior using computer vision has not been explored and could be helpful, given that fire behavior often dictates how firefighters respond to fire situations. To test whether types of fire behavior could be reliably classified, we collected and labeled a unique unmanned aerial system (UAS) image dataset of fire behavior classes for training and validation with You Only Look Once (YOLO) detection models. Our 960 labeled images were sourced from over 21 h of UAS video collected during prescribed fire operations covering a large region of Texas and Louisiana, United States. National Wildfire Coordinating Group (NWCG) fire behavior observations and descriptions served as a reference for determining fire behavior classes during labeling. YOLOv8 models were trained on NWCG Rank 1–3 fire behavior descriptions in grassland, shrubland, forested, and combined fire regimes within our study area. Models were first trained and validated to classify isolated image objects of fire behavior, and then separately trained to locate and segment fire behavior classes in full UAS images. Models trained to classify isolated image objects consistently achieved a mAP of 0.808 or higher, with the combined fire regime producing the best results (mAP = 0.897). Most segmentation models performed relatively poorly, except for the forest regime model, which achieved box (locate) and mask (segment) mAPs of 0.590 and 0.611, respectively. Our results indicate that classifying fire behavior with computer vision is possible across different fire regimes and fuel models, whereas locating and segmenting fire behavior types against background information is comparatively difficult. It may nevertheless be a manageable task with enough data and with models developed for a specific fire regime. With an increasing number of destructive wildfires and new challenges confronting fire managers, identifying how new technologies can quickly assess wildfire situations can improve wildfire responder awareness. We conclude that computer vision can support levels of abstraction deeper than simple smoke or flame detection, making even more detailed aerial fire monitoring possible with a UAS.
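The abstract describes a two-stage workflow: YOLOv8 models are first trained to classify isolated image objects of fire behavior, then separately trained to locate (bounding box) and segment (mask) those classes in full UAS frames. The sketch below is a minimal illustration of that workflow using the Ultralytics YOLOv8 Python API; it is not the authors' code. The dataset paths, class layout, and hyperparameters are hypothetical placeholders, and the paper's exact setup may differ (for instance, the paper reports mAP for the classification stage, which suggests detection-style evaluation on cropped objects rather than a plain classifier).

    # Minimal sketch of the two-stage workflow, assuming the Ultralytics
    # YOLOv8 Python API (pip install ultralytics). All paths below are
    # hypothetical placeholders, not the authors' dataset.
    from ultralytics import YOLO

    # Stage 1: classify isolated image objects of fire behavior.
    # Assumes an image-classification folder layout: train/ and val/
    # splits, each with one subfolder per NWCG rank class (rank1..rank3).
    cls_model = YOLO("yolov8n-cls.pt")             # pretrained classification weights
    cls_model.train(data="fire_behavior_cls", epochs=100, imgsz=224)
    cls_metrics = cls_model.val()
    print(cls_metrics.top1)                        # top-1 accuracy on the val split

    # Stage 2: locate (box) and segment (mask) fire behavior classes in
    # full UAS frames. Assumes a YOLO-format segmentation dataset
    # described by a YAML file listing image paths and class names.
    seg_model = YOLO("yolov8n-seg.pt")             # pretrained segmentation weights
    seg_model.train(data="fire_behavior_seg.yaml", epochs=100, imgsz=640)
    seg_metrics = seg_model.val()
    print(seg_metrics.box.map50)                   # box (locate) mAP@0.5
    print(seg_metrics.seg.map50)                   # mask (segment) mAP@0.5

Reporting box and mask mAP separately mirrors how the paper presents its locate (0.590) and segment (0.611) scores for the forest regime model.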