Natural Disaster Damage Assessment using Semantic Segmentation of UAV Imagery

Muhammad Haroon Asad, Malik Muhammad Asim, Muhammad Naeem Mumtaz Awan, M. Yousaf
{"title":"Natural Disaster Damage Assessment using Semantic Segmentation of UAV Imagery","authors":"Muhammad Haroon Asad, Malik Muhammad Asim, Muhammad Naeem Mumtaz Awan, M. Yousaf","doi":"10.1109/ICRAI57502.2023.10089539","DOIUrl":null,"url":null,"abstract":"Numerous natural disasters due to climate change pose major threats to the sustainability of public infrastructure and human lives. For emergency rescue and recovery during a disaster, a rapid and accurate evaluation of disaster damage is essential. In recent years, the Transformer has gained popularity in a number of tasks related to computer vision, which offers tremendous potential for improving the accuracy of disaster damage assessments. Our research aims to determine whether Vision Transformer (ViT) can be used to assess natural disaster damage on high-resolution Unmanned Aerial Vehicle (UAV) data in comparison with conventional deep-learning semantic segmentation techniques. We discuss if Transformer can perform better than CNNs in accurately assessing the damage caused in order to bridge the gap. Detailed performance comparison of state-of-art deep learning semantic segmentation models (UNET, Segnet, PSPNet, Deeplabv3+) and Transformer framework (SegFormer) for damage assessment is presented. The experimentation is performed on both natural disaster damage datasets (RescueNet, FloodNet). The study supported SegFormer as the most appropriate model for estimating disaster damage, with mIoUs of 96% on the RescueNet dataset and 82.22% on the FloodNet dataset, respectively. The Transformer is capable of outperforming conventional segmentation CNNs in understanding the entirety of the scene and assessing the severity of the damage, based on both quantitative evaluation and visual results.","PeriodicalId":447565,"journal":{"name":"2023 International Conference on Robotics and Automation in Industry (ICRAI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Robotics and Automation in Industry (ICRAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRAI57502.2023.10089539","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Numerous natural disasters due to climate change pose major threats to the sustainability of public infrastructure and human lives. For emergency rescue and recovery during a disaster, a rapid and accurate evaluation of disaster damage is essential. In recent years, the Transformer has gained popularity in a number of computer-vision tasks, offering tremendous potential for improving the accuracy of disaster damage assessments. Our research aims to determine whether the Vision Transformer (ViT) can be used to assess natural disaster damage on high-resolution Unmanned Aerial Vehicle (UAV) data, in comparison with conventional deep-learning semantic segmentation techniques. To bridge this gap, we examine whether Transformers can outperform CNNs in accurately assessing the damage caused. A detailed performance comparison of state-of-the-art deep-learning semantic segmentation models (UNet, SegNet, PSPNet, DeepLabv3+) and a Transformer framework (SegFormer) for damage assessment is presented. Experiments are performed on two natural disaster damage datasets, RescueNet and FloodNet. The study supports SegFormer as the most appropriate model for estimating disaster damage, achieving mIoUs of 96% on the RescueNet dataset and 82.22% on the FloodNet dataset. Based on both quantitative evaluation and visual results, the Transformer outperforms conventional segmentation CNNs in understanding the scene as a whole and in assessing the severity of the damage.
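The abstract ranks models by mean Intersection-over-Union (mIoU), the standard semantic-segmentation metric behind the reported 96% and 82.22% figures. As a reference for how that metric is typically computed, here is a minimal NumPy sketch; it is not the authors' evaluation code, and the array shapes and class count are illustrative.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean IoU between predicted and ground-truth label masks,
    averaged over classes present in either mask."""
    ious = []
    for cls in range(num_classes):
        pred_mask = pred == cls
        target_mask = target == cls
        union = np.logical_or(pred_mask, target_mask).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        intersection = np.logical_and(pred_mask, target_mask).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: 2x2 masks with three classes (0, 1, 2).
pred   = np.array([[0, 1], [2, 2]])
target = np.array([[0, 1], [2, 1]])
print(f"mIoU: {mean_iou(pred, target, num_classes=3):.3f}")  # 0.667
```

In benchmark practice, per-class intersections and unions are usually accumulated over the whole test set before dividing, rather than averaging per-image scores; the sketch above shows the per-pair computation for clarity.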