Forest Fire Image Deblurring Based on Spatial–Frequency Domain Fusion

IF 2.4 | Tier 2 (Agricultural and Forestry Sciences) | Q1 FORESTRY
Forests Pub Date : 2024-06-13 DOI:10.3390/f15061030
Xueyi Kong, Yunfei Liu, Ruipeng Han, Shuang Li, Han Liu
{"title":"基于空间-频率域融合的森林火灾图像去毛刺技术","authors":"Xueyi Kong, Yunfei Liu, Ruipeng Han, Shuang Li, Han Liu","doi":"10.3390/f15061030","DOIUrl":null,"url":null,"abstract":"UAVs are commonly used in forest fire detection, but the captured fire images often suffer from blurring due to the rapid motion between the airborne camera and the fire target. In this study, a multi-input, multi-output U-Net architecture that combines spatial domain and frequency domain information is proposed for image deblurring. The architecture includes a multi-branch dilated convolution attention residual module in the encoder to enhance receptive fields and address local features and texture detail limitations. A feature-fusion module integrating spatial frequency domains is also included in the skip connection structure to reduce feature loss and enhance deblurring performance. Additionally, a multi-channel convolution attention residual module in the decoders improves the reconstruction of local and contextual information. A weighted loss function is utilized to enhance network stability and generalization. Experimental results demonstrate that the proposed model outperforms popular models in terms of subjective perception and quantitative evaluation, achieving a PSNR of 32.26 dB, SSIM of 0.955, LGF of 10.93, and SMD of 34.31 on the self-built forest fire datasets and reaching 86% of the optimal PSNR and 87% of the optimal SSIM. In experiments without reference images, the model performs well in terms of LGF and SMD. The results obtained by this model are superior to the currently popular SRN and MPRNet models.","PeriodicalId":12339,"journal":{"name":"Forests","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Forest Fire Image Deblurring Based on Spatial–Frequency Domain Fusion\",\"authors\":\"Xueyi Kong, Yunfei Liu, Ruipeng Han, Shuang Li, Han Liu\",\"doi\":\"10.3390/f15061030\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"UAVs are commonly used in forest fire detection, but the captured fire images often suffer from blurring due to the rapid motion between the airborne camera and the fire target. In this study, a multi-input, multi-output U-Net architecture that combines spatial domain and frequency domain information is proposed for image deblurring. The architecture includes a multi-branch dilated convolution attention residual module in the encoder to enhance receptive fields and address local features and texture detail limitations. A feature-fusion module integrating spatial frequency domains is also included in the skip connection structure to reduce feature loss and enhance deblurring performance. Additionally, a multi-channel convolution attention residual module in the decoders improves the reconstruction of local and contextual information. A weighted loss function is utilized to enhance network stability and generalization. Experimental results demonstrate that the proposed model outperforms popular models in terms of subjective perception and quantitative evaluation, achieving a PSNR of 32.26 dB, SSIM of 0.955, LGF of 10.93, and SMD of 34.31 on the self-built forest fire datasets and reaching 86% of the optimal PSNR and 87% of the optimal SSIM. In experiments without reference images, the model performs well in terms of LGF and SMD. 
The results obtained by this model are superior to the currently popular SRN and MPRNet models.\",\"PeriodicalId\":12339,\"journal\":{\"name\":\"Forests\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Forests\",\"FirstCategoryId\":\"97\",\"ListUrlMain\":\"https://doi.org/10.3390/f15061030\",\"RegionNum\":2,\"RegionCategory\":\"农林科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"FORESTRY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Forests","FirstCategoryId":"97","ListUrlMain":"https://doi.org/10.3390/f15061030","RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"FORESTRY","Score":null,"Total":0}
Citations: 0

Abstract

UAVs are commonly used in forest fire detection, but the captured fire images often suffer from blurring due to the rapid motion between the airborne camera and the fire target. In this study, a multi-input, multi-output U-Net architecture that combines spatial domain and frequency domain information is proposed for image deblurring. The architecture includes a multi-branch dilated convolution attention residual module in the encoder to enhance receptive fields and address local features and texture detail limitations. A feature-fusion module integrating spatial frequency domains is also included in the skip connection structure to reduce feature loss and enhance deblurring performance. Additionally, a multi-channel convolution attention residual module in the decoders improves the reconstruction of local and contextual information. A weighted loss function is utilized to enhance network stability and generalization. Experimental results demonstrate that the proposed model outperforms popular models in terms of subjective perception and quantitative evaluation, achieving a PSNR of 32.26 dB, SSIM of 0.955, LGF of 10.93, and SMD of 34.31 on the self-built forest fire datasets and reaching 86% of the optimal PSNR and 87% of the optimal SSIM. In experiments without reference images, the model performs well in terms of LGF and SMD. The results obtained by this model are superior to the currently popular SRN and MPRNet models.
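
The abstract describes the architecture only at a high level and no code accompanies this listing. As a rough, hypothetical sketch of the kind of spatial–frequency feature fusion it refers to (a feature map processed in parallel by a spatial convolution branch and an FFT-based frequency branch, then merged with a residual connection), the PyTorch module below is illustrative only; every name and design choice here is an assumption, not the authors' implementation.

# Illustrative sketch only: a minimal spatial-frequency feature-fusion block,
# assuming the fusion processes a feature map in parallel in the spatial domain
# (convolution) and the frequency domain (FFT), then merges the two branches.
# Module and parameter names are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


class SpatialFrequencyFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: ordinary 3x3 convolution on the feature map.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Frequency branch: 1x1 convolution applied to the stacked real and
        # imaginary parts of the 2D FFT, followed by an inverse FFT.
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Merge both views back to the original channel count.
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Spatial-domain path.
        s = self.spatial(x)

        # Frequency-domain path: rfft2 -> conv on real/imag -> irfft2.
        f = torch.fft.rfft2(x, norm="ortho")
        f = torch.cat([f.real, f.imag], dim=1)
        f = self.freq(f)
        real, imag = torch.chunk(f, 2, dim=1)
        f = torch.fft.irfft2(torch.complex(real, imag), s=x.shape[-2:], norm="ortho")

        # Concatenate both branches and fuse, with a residual connection.
        return x + self.merge(torch.cat([s, f], dim=1))


if __name__ == "__main__":
    block = SpatialFrequencyFusion(channels=32)
    feat = torch.randn(1, 32, 64, 64)   # e.g. a skip-connection feature map
    print(block(feat).shape)            # torch.Size([1, 32, 64, 64])

In a U-Net-style deblurring network, a block like this would typically sit in the skip connections, so that both local spatial detail and global frequency content reach the decoder.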
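
PSNR and SSIM, quoted above, are standard full-reference image-quality metrics, while LGF and SMD serve as no-reference sharpness measures when ground-truth images are unavailable. As a generic example of how the two full-reference scores are typically computed (not the authors' evaluation script; file names are placeholders), using scikit-image:

# Minimal example of computing the full-reference metrics quoted in the
# abstract (PSNR and SSIM) with scikit-image. Generic evaluation snippet,
# not the authors' code; paths are placeholders.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(sharp_path: str, deblurred_path: str) -> tuple[float, float]:
    sharp = io.imread(sharp_path)
    deblurred = io.imread(deblurred_path)
    psnr = peak_signal_noise_ratio(sharp, deblurred, data_range=255)
    ssim = structural_similarity(sharp, deblurred, channel_axis=-1, data_range=255)
    return psnr, ssim


if __name__ == "__main__":
    psnr, ssim = evaluate_pair("sharp.png", "deblurred.png")
    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")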
Source journal: Forests (FORESTRY)
CiteScore: 4.40
Self-citation rate: 17.20%
Articles published per year: 1823
Average review time: 19.02 days
Journal introduction: Forests (ISSN 1999-4907) is an international and cross-disciplinary scholarly journal of forestry and forest ecology. It publishes research papers, short communications, and review papers. There is no restriction on the length of the papers. Our aim is to encourage scientists to publish their experimental and theoretical research in as much detail as possible. Full experimental and/or methodical details must be provided for research articles.