A novel RCACycleGAN model is proposed for the high-precision reconstruction of sparse TFM images

Zhouteng Liu, Liming Li, Wenfa Zhu, Yanxun Xiang, Guopeng Fan, Hui Zhang
{"title":"A novel RCACycleGAN model is proposed for the high-precision reconstruction of sparse TFM images","authors":"Zhouteng Liu, Liming Li, Wenfa Zhu, Yanxun Xiang, Guopeng Fan, Hui Zhang","doi":"10.1784/insi.2024.66.5.272","DOIUrl":null,"url":null,"abstract":"The sparse total focusing method (TFM) has been shown to enhance the computational efficacy of ultrasound imaging but the image quality of ultrasound regrettably deteriorates with an increase in the sparsity rate of array elements. Deep learning has made remarkable advancements in image\n processing and cycle-consistent generative adversarial networks (CycleGANs) have been extensively employed to reconstruct diverse image categories. However, due to the incomplete extraction of image feature information by the generator and discriminator in a CycleGAN, high-quality sparse TFM\n images cannot be directly reconstructed using CycleGANs. There is also a risk of losing crucial feature information related to minor defects. As a result, this paper modifies the generator and discriminator in the CycleGAN to construct a new relativistic discriminator and coordinate attention\n CycleGAN (RCACycleGAN) model, which enables high-precision reconstruction of sparse TFM images. The addition of the coordinate attention module to the CycleGAN enhances the defective feature representation by fully considering the channel and spatial correlation between regions and using the\n fusion of spatially perceived feature maps in different directions. It solves the problem of easy loss of defective key feature information. The relativistic discriminator replaces the PatchGAN discriminator in the CycleGAN and evaluates the quality of both real and sparse TFM reconstructed\n images to ensure a relative image quality evaluation. This process solves the problem of unstable image quality of the sparse TFM reconstructed image. Experimental results demonstrate that RCACycleGAN can stably reconstruct sparse TFM images even in small sample dataset scenarios. The proposed\n network model reconstructs images with better accuracy, including in terms of structural similarity, defect roundness and area, and has a shorter training time than several existing network models.","PeriodicalId":506650,"journal":{"name":"Insight - Non-Destructive Testing and Condition Monitoring","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Insight - Non-Destructive Testing and Condition Monitoring","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1784/insi.2024.66.5.272","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The sparse total focusing method (TFM) has been shown to improve the computational efficiency of ultrasound imaging, but image quality deteriorates as the sparsity rate of the array elements increases. Deep learning has made remarkable advances in image processing, and cycle-consistent generative adversarial networks (CycleGANs) have been widely employed to reconstruct diverse categories of images. However, because the generator and discriminator in a CycleGAN extract image feature information incompletely, high-quality sparse TFM images cannot be reconstructed directly with a CycleGAN, and key feature information related to minor defects risks being lost. This paper therefore modifies the generator and discriminator of the CycleGAN to construct a new relativistic discriminator and coordinate attention CycleGAN (RCACycleGAN) model that enables high-precision reconstruction of sparse TFM images. Adding the coordinate attention module to the CycleGAN enhances the representation of defect features by fully considering the channel and spatial correlations between regions and fusing direction-aware, spatially pooled feature maps, which resolves the tendency to lose key defect feature information. The relativistic discriminator replaces the PatchGAN discriminator in the CycleGAN and evaluates the quality of real images relative to the sparse TFM reconstructions, ensuring a relative image quality assessment and addressing the instability in the quality of the reconstructed images. Experimental results demonstrate that RCACycleGAN reconstructs sparse TFM images stably even in small-sample dataset scenarios. The proposed network model reconstructs images with better accuracy, including in terms of structural similarity, defect roundness and defect area, and trains in less time than several existing network models.
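
The abstract does not include implementation details, but the coordinate attention mechanism it describes is well established in the literature. The following is a minimal PyTorch sketch of a standard coordinate attention block of the kind referenced (pooling along the height and width axes separately, fusing the two direction-aware maps, then re-weighting the input); the class name, reduction ratio and layer choices are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Sketch of coordinate attention: pool features along height and width
    separately, fuse the two direction-aware maps, and gate the input with
    per-direction sigmoid attention (assumed structure, not the paper's code)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # keep H, collapse W
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # keep W, collapse H
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
        )
        self.attn_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Direction-aware pooling: one descriptor per spatial axis.
        x_h = self.pool_h(x)                          # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
        # Fuse both maps through a shared 1x1 conv, then split them back.
        y = self.fuse(torch.cat([x_h, x_w], dim=2))   # (B, mid, H+W, 1)
        y_h, y_w = torch.split(y, [h, w], dim=2)
        # Separate sigmoid gates for the height and width directions.
        a_h = torch.sigmoid(self.attn_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.attn_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w
```

Because the gates retain one full spatial axis each, the block can localise where along a row or column a small defect response sits, which is the property the abstract credits with preserving minor-defect features.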
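Likewise, the abstract's relativistic discriminator, which scores real and reconstructed images against each other rather than in isolation, matches the standard relativistic average GAN formulation. A minimal sketch under that assumption (function names and the averaging variant are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def relativistic_d_loss(real_logits: torch.Tensor,
                        fake_logits: torch.Tensor) -> torch.Tensor:
    """Discriminator loss: a real image should score higher than the average
    fake, and a fake lower than the average real. Pass detached fake logits."""
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    loss_real = F.binary_cross_entropy_with_logits(
        real_rel, torch.ones_like(real_rel))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_rel, torch.zeros_like(fake_rel))
    return 0.5 * (loss_real + loss_fake)

def relativistic_g_loss(real_logits: torch.Tensor,
                        fake_logits: torch.Tensor) -> torch.Tensor:
    """Generator loss: the mirror objective, pushing reconstructions to score
    higher than the average real image."""
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    loss_real = F.binary_cross_entropy_with_logits(
        real_rel, torch.zeros_like(real_rel))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_rel, torch.ones_like(fake_rel))
    return 0.5 * (loss_real + loss_fake)
```

Scoring each image relative to the opposing batch gives the generator a gradient signal even when the discriminator is confident, which is consistent with the stability benefit the abstract reports for sparse TFM reconstruction.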