Segmentation of surface and subsurface damages in concrete structures through fusion of multi-modal images using vision transformer

IF 11.5 | CAS Tier 1 (Engineering & Technology) | Q1 CONSTRUCTION & BUILDING TECHNOLOGY
Lokeswari Malepati, Vedhus Hoskere, Nagarajan Ganapathy, S. Suriya Prakash
{"title":"基于视觉变换的多模态图像融合对混凝土结构表面和地下损伤进行分割","authors":"Lokeswari Malepati ,&nbsp;Vedhus Hoskere ,&nbsp;Nagarajan Ganapathy ,&nbsp;S. Suriya Prakash","doi":"10.1016/j.autcon.2025.106469","DOIUrl":null,"url":null,"abstract":"<div><div>Semantic segmentation of multimodal images combining visible and infrared spectra enables quantification of both surface and subsurface damage in concrete structures. High-quality segmentation, however, hinges on precise cross-modal registration and an effective fusion strategy. Sparse feature similarity across these modalities typically observed in real-world infrastructure images, limits the effectiveness and generalizability of existing registration algorithms. To overcome this limitation, this paper proposes a new multi-modal image registration algorithm that narrows the search space leveraging epipolar constraint and employs a modified multi-scale mutual-information metric for robust feature matching. Tests on a purpose-built dataset show the method surpasses state-of-the-art registration algorithms. The paper also evaluates how fusion schemes and loss functions affect segmentation performance, revealing that a combined loss function (i.e., OHEM cross entropy and Generalized Dice Loss) paired with an early-fusion strategy yields the highest mean Intersection-over-Union. These contributions advance a comprehensive framework for automated damage segmentation in multimodal imagery.</div></div>","PeriodicalId":8660,"journal":{"name":"Automation in Construction","volume":"179 ","pages":"Article 106469"},"PeriodicalIF":11.5000,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Segmentation of surface and subsurface damages in concrete structures through fusion of multi-modal images using vision transformer\",\"authors\":\"Lokeswari Malepati ,&nbsp;Vedhus Hoskere ,&nbsp;Nagarajan Ganapathy ,&nbsp;S. Suriya Prakash\",\"doi\":\"10.1016/j.autcon.2025.106469\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Semantic segmentation of multimodal images combining visible and infrared spectra enables quantification of both surface and subsurface damage in concrete structures. High-quality segmentation, however, hinges on precise cross-modal registration and an effective fusion strategy. Sparse feature similarity across these modalities typically observed in real-world infrastructure images, limits the effectiveness and generalizability of existing registration algorithms. To overcome this limitation, this paper proposes a new multi-modal image registration algorithm that narrows the search space leveraging epipolar constraint and employs a modified multi-scale mutual-information metric for robust feature matching. Tests on a purpose-built dataset show the method surpasses state-of-the-art registration algorithms. The paper also evaluates how fusion schemes and loss functions affect segmentation performance, revealing that a combined loss function (i.e., OHEM cross entropy and Generalized Dice Loss) paired with an early-fusion strategy yields the highest mean Intersection-over-Union. 
These contributions advance a comprehensive framework for automated damage segmentation in multimodal imagery.</div></div>\",\"PeriodicalId\":8660,\"journal\":{\"name\":\"Automation in Construction\",\"volume\":\"179 \",\"pages\":\"Article 106469\"},\"PeriodicalIF\":11.5000,\"publicationDate\":\"2025-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Automation in Construction\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0926580525005096\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CONSTRUCTION & BUILDING TECHNOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automation in Construction","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0926580525005096","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CONSTRUCTION & BUILDING TECHNOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Semantic segmentation of multimodal images combining visible and infrared spectra enables quantification of both surface and subsurface damage in concrete structures. High-quality segmentation, however, hinges on precise cross-modal registration and an effective fusion strategy. Sparse feature similarity across these modalities, typically observed in real-world infrastructure images, limits the effectiveness and generalizability of existing registration algorithms. To overcome this limitation, this paper proposes a new multi-modal image registration algorithm that narrows the search space by leveraging an epipolar constraint and employs a modified multi-scale mutual-information metric for robust feature matching. Tests on a purpose-built dataset show the method surpasses state-of-the-art registration algorithms. The paper also evaluates how fusion schemes and loss functions affect segmentation performance, revealing that a combined loss function (i.e., OHEM cross entropy and Generalized Dice Loss) paired with an early-fusion strategy yields the highest mean Intersection-over-Union. These contributions advance a comprehensive framework for automated damage segmentation in multimodal imagery.
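For illustration, the loss pairing highlighted in the abstract (OHEM cross-entropy combined with Generalized Dice Loss) can be sketched as below. This is a minimal PyTorch sketch based on the commonly used formulations of the two losses, not the authors' implementation; the `thresh` and `min_kept` parameters and the equal weighting of the two terms are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ohem_cross_entropy(logits, target, thresh=0.7, min_kept=100_000):
    """OHEM cross-entropy: average the per-pixel loss over the hardest pixels
    only, i.e. pixels whose predicted probability for the true class falls
    below `thresh`, always keeping at least `min_kept` pixels."""
    pixel_loss = F.cross_entropy(logits, target, reduction="none").view(-1)
    with torch.no_grad():
        # probability assigned to the ground-truth class at each pixel
        true_prob = F.softmax(logits, dim=1).gather(1, target.unsqueeze(1)).view(-1)
    k = max(int((true_prob < thresh).sum()), min_kept)
    k = min(k, pixel_loss.numel())
    _, hard_idx = torch.topk(-true_prob, k)  # indices of the k lowest-confidence pixels
    return pixel_loss[hard_idx].mean()

def generalized_dice_loss(logits, target, num_classes, eps=1e-6):
    """Generalized Dice Loss with inverse-square class-frequency weights."""
    prob = F.softmax(logits, dim=1)                                   # (N, C, H, W)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    weights = 1.0 / (onehot.sum(dim=(0, 2, 3)) ** 2 + eps)            # per-class weights (C,)
    intersection = (weights * (prob * onehot).sum(dim=(0, 2, 3))).sum()
    union = (weights * (prob + onehot).sum(dim=(0, 2, 3))).sum()
    return 1.0 - 2.0 * intersection / (union + eps)

def combined_loss(logits, target, num_classes):
    # Equal weighting of the two terms is an assumption made for this sketch.
    return ohem_cross_entropy(logits, target) + generalized_dice_loss(logits, target, num_classes)

# Example: 4-class segmentation logits for a small batch of fused images.
logits = torch.randn(2, 4, 64, 64)         # (N, C, H, W)
target = torch.randint(0, 4, (2, 64, 64))  # (N, H, W) ground-truth class indices
print(combined_loss(logits, target, num_classes=4))
```

In the early-fusion setting described in the abstract, `logits` would come from the segmentation network applied to channel-concatenated visible and infrared inputs; the threshold, pixel-count floor, and term weighting here are hyperparameters chosen for the sketch, not values reported in the paper.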
Source Journal
Automation in Construction (Engineering & Technology – Civil Engineering)
CiteScore: 19.20
Self-citation rate: 16.50%
Articles published: 563
Review time: 8.5 months
About the journal: Automation in Construction is an international journal that focuses on publishing original research papers related to the use of Information Technologies in various aspects of the construction industry. The journal covers topics such as design, engineering, construction technologies, and the maintenance and management of constructed facilities. The scope of Automation in Construction is extensive and covers all stages of the construction life cycle. This includes initial planning and design, construction of the facility, operation and maintenance, as well as the eventual dismantling and recycling of buildings and engineering structures.