Fusion of thermal and RGB images for automated deep learning based crack detection in civil infrastructure

Quincy G. Alexander, Vedhus Hoskere, Yasutaka Narazaki, Andrew Maxwell, Billie F. Spencer Jr
{"title":"Fusion of thermal and RGB images for automated deep learning based crack detection in civil infrastructure","authors":"Quincy G. Alexander,&nbsp;Vedhus Hoskere,&nbsp;Yasutaka Narazaki,&nbsp;Andrew Maxwell,&nbsp;Billie F. Spencer Jr","doi":"10.1007/s43503-022-00002-y","DOIUrl":null,"url":null,"abstract":"<div><p>Research has been continually growing toward the development of image-based structural health monitoring tools that can leverage deep learning models to automate damage detection in civil infrastructure. However, these tools are typically based on RGB images, which work well under ideal lighting conditions, but often have degrading performance in poor and low-light scenes. On the other hand, thermal images, while lacking in crispness of details, do not show the same degradation of performance in changing lighting conditions. The potential to enhance automated damage detection by fusing RGB and thermal images together within a deep learning network has yet to be explored. In this paper, RGB and thermal images are fused in a ResNET-based semantic segmentation model for vision-based inspections. A convolutional neural network is then employed to automatically identify damage defects in concrete. The model uses a thermal and RGB encoder to combine the features detected from both spectrums to improve its performance of the model, and a single decoder to predict the classes. The results suggest that this RGB-thermal fusion network outperforms the RGB-only network in the detection of cracks using the Intersection Over Union (IOU) performance metric. The RGB-thermal fusion model not only detected damage at a higher performance rate, but it also performed much better in differentiating the types of damage.</p></div>","PeriodicalId":72138,"journal":{"name":"AI in civil engineering","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI in civil engineering","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43503-022-00002-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Research has been continually growing toward the development of image-based structural health monitoring tools that can leverage deep learning models to automate damage detection in civil infrastructure. However, these tools are typically based on RGB images, which work well under ideal lighting conditions but often exhibit degraded performance in poor and low-light scenes. Thermal images, on the other hand, while lacking in crispness of detail, do not show the same degradation of performance under changing lighting conditions. The potential to enhance automated damage detection by fusing RGB and thermal images within a deep learning network has yet to be explored. In this paper, RGB and thermal images are fused in a ResNet-based semantic segmentation model for vision-based inspections. A convolutional neural network is then employed to automatically identify damage defects in concrete. The model uses separate thermal and RGB encoders to combine the features detected from both spectra, improving the performance of the model, and a single decoder to predict the classes. The results suggest that this RGB-thermal fusion network outperforms the RGB-only network in the detection of cracks, as measured by the Intersection over Union (IoU) metric. The RGB-thermal fusion model not only detected damage at a higher rate, but also performed much better in differentiating the types of damage.
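To make the described architecture more concrete, the following is a minimal PyTorch sketch, not the authors' implementation: two ResNet encoders (one for the RGB stream, one for the single-channel thermal stream) whose feature maps are fused and passed to a single decoder, plus a small helper for the IoU metric reported above. The backbone choice (ResNet-18), the element-wise-addition fusion rule, the decoder design, and all names are illustrative assumptions; the abstract does not specify these details.

```python
# Minimal sketch of an RGB-thermal fusion segmentation network in the spirit of
# the paper: two ResNet encoders (RGB and thermal), feature fusion, one decoder.
# Assumptions (not from the paper): PyTorch/torchvision, ResNet-18 backbones,
# element-wise addition as the fusion operation, and a simple upsampling decoder.
import torch
import torch.nn as nn
import torchvision.models as models


class RGBThermalFusionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Separate encoders for the RGB and thermal streams.
        rgb = models.resnet18(weights=None)
        thermal = models.resnet18(weights=None)
        # Thermal images have a single channel, so replace the first conv layer.
        thermal.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Keep everything up to (but not including) the pooling/classification head.
        self.rgb_encoder = nn.Sequential(*list(rgb.children())[:-2])
        self.thermal_encoder = nn.Sequential(*list(thermal.children())[:-2])
        # Single decoder: map fused features to class scores at input resolution.
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(128, num_classes, kernel_size=1),
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        # Fuse the two feature maps by element-wise addition (assumed fusion rule).
        fused = self.rgb_encoder(rgb) + self.thermal_encoder(thermal)
        return self.decoder(fused)


def iou(pred_mask: torch.Tensor, true_mask: torch.Tensor) -> float:
    """Intersection over Union for binary masks, the metric used in the paper."""
    intersection = (pred_mask & true_mask).sum().item()
    union = (pred_mask | true_mask).sum().item()
    return intersection / union if union > 0 else 1.0


if __name__ == "__main__":
    model = RGBThermalFusionNet(num_classes=2)
    rgb = torch.randn(1, 3, 224, 224)      # RGB image batch
    thermal = torch.randn(1, 1, 224, 224)  # thermal image batch
    logits = model(rgb, thermal)           # (1, 2, 224, 224) class scores
    pred = logits.argmax(dim=1).bool()
    print(logits.shape, iou(pred, pred))
```

Element-wise addition is only the simplest plausible fusion rule; concatenating the two feature maps followed by a 1x1 convolution would be an equally reasonable choice under the same two-encoder, one-decoder layout.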
