Image-to-Image Translation of Structural Damage Using Generative Adversarial Networks

Subin Varghese, Rebecca Wang, Vedhus Hoskere
DOI: 10.12783/shm2021/36307
Published in: Proceedings of the 13th International Workshop on Structural Health Monitoring
Publication date: 2022-03-15
Publication type: Journal Article
Citations: 2

Abstract

In the aftermath of earthquakes, structures can become unsafe and hazardous for humans to safely reside. Automated methods that detect structural damage can be invaluable for rapid inspections and faster recovery times. Deep neural networks (DNNs) have proven to be an effective means to classify damaged areas in images of structures but have limited generalizability due to the lack of large and diverse annotated datasets (e.g., variations in building properties like size, shape, color). Given a dataset of paired images of damaged and undamaged structures, supervised deep learning methods could be employed, but such paired correspondences of images required for training are exceedingly difficult to acquire. Obtaining a variety of undamaged images, and a smaller set of damaged images, is more viable.

We present a novel application of deep learning for unpaired image-to-image translation between undamaged and damaged structures as a means of data augmentation to combat the lack of diverse data. Unpaired image-to-image translation is achieved using Cycle Consistent Adversarial Network (CCAN) architectures, which have the capability to translate images while retaining the geometric structure of an image. We explore the capability of the original CCAN architecture, and propose a new architecture for unpaired image-to-image translation (termed Eigen Integrated Generative Adversarial Network, or EIGAN) that addresses shortcomings of the original architecture for our application. We create a new unpaired dataset to translate an image between domains of damaged and undamaged structures. The dataset created consists of a set of damaged and undamaged buildings from Mexico City affected by the 2017 Puebla earthquake. Qualitative and quantitative results of the various architectures are presented to better compare the quality of the translated images. A comparison is also done on the performance of DNNs trained to classify damaged structures using generated images. The results demonstrate that targeted image-to-image translation of undamaged to damaged structures is an effective means of data augmentation to improve network performance.
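The key property of CycleGAN-style (CCAN) architectures mentioned above, translating between domains while retaining an image's geometric structure, is enforced by a cycle-consistency loss. The following is a minimal sketch of that loss term only; the "generators" here are toy stand-ins (real ones would be convolutional networks mapping between undamaged and damaged image domains), and all names are illustrative rather than taken from the paper.

```python
# Sketch of the cycle-consistency loss L_cyc used in CycleGAN-style training.
# Toy generators: g maps domain A (undamaged) to B (damaged), f maps B to A.

def g_a_to_b(x):
    """Toy generator A->B; a real model would be a neural network."""
    return [v + 1.0 for v in x]

def f_b_to_a(y):
    """Toy generator B->A, approximately inverting g_a_to_b."""
    return [v - 1.0 for v in y]

def l1(u, v):
    """Mean absolute error between two equal-length flattened images."""
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

def cycle_consistency_loss(x_a, y_b):
    """L_cyc = |F(G(x)) - x| + |G(F(y)) - y|.

    Minimizing this encourages a full A->B->A round trip to reconstruct
    the original image, so translation preserves geometric structure.
    """
    forward_cycle = l1(f_b_to_a(g_a_to_b(x_a)), x_a)   # A -> B -> A
    backward_cycle = l1(g_a_to_b(f_b_to_a(y_b)), y_b)  # B -> A -> B
    return forward_cycle + backward_cycle

x_a = [0.2, 0.5, 0.9]   # an "undamaged" image, flattened to a vector
y_b = [1.1, 1.4, 1.6]   # a "damaged" image, flattened to a vector
print(cycle_consistency_loss(x_a, y_b))  # near zero: toy generators invert each other
```

In actual training, this term is added to the adversarial losses of the two discriminators, so the generators must both fool the discriminators and remain mutually invertible.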