Conditional image-to-image translation generative adversarial network (cGAN) for fabric defect data augmentation

Swash Sami Mohammed, Hülya Gökalp Clarke
{"title":"Conditional image-to-image translation generative adversarial network (cGAN) for fabric defect data augmentation","authors":"Swash Sami Mohammed, Hülya Gökalp Clarke","doi":"10.1007/s00521-024-10179-1","DOIUrl":null,"url":null,"abstract":"<p>The availability of comprehensive datasets is a crucial challenge for developing artificial intelligence (AI) models in various applications and fields. The lack of large and diverse public fabric defect datasets forms a major obstacle to properly and accurately developing and training AI models for detecting and classifying fabric defects in real-life applications. Models trained on limited datasets struggle to identify underrepresented defects, reducing their practicality. To address these issues, this study suggests using a conditional generative adversarial network (cGAN) for fabric defect data augmentation. The proposed image-to-image translator GAN features a conditional U-Net generator and a 6-layered PatchGAN discriminator. The conditional U-Network (U-Net) generator can produce highly realistic synthetic defective samples and offers the ability to control various characteristics of the generated samples by taking two input images: a segmented defect mask and a clean fabric image. The segmented defect mask provides information about various aspects of the defects to be added to the clean fabric sample, including their type, shape, size, and location. By augmenting the training dataset with diverse and realistic synthetic samples, the AI models can learn to identify a broader range of defects more accurately. This technique helps overcome the limitations of small or unvaried datasets, leading to improved defect detection accuracy and generalizability. Moreover, this proposed augmentation method can find applications in other challenging fields, such as generating synthetic samples for medical imaging datasets related to brain and lung tumors.</p>","PeriodicalId":18925,"journal":{"name":"Neural Computing and Applications","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computing and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00521-024-10179-1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The availability of comprehensive datasets is a crucial challenge for developing artificial intelligence (AI) models in many applications and fields. The lack of large, diverse public fabric defect datasets is a major obstacle to properly and accurately developing and training AI models that detect and classify fabric defects in real-life applications. Models trained on limited datasets struggle to identify underrepresented defects, reducing their practicality. To address these issues, this study proposes a conditional generative adversarial network (cGAN) for fabric defect data augmentation. The proposed image-to-image translation GAN features a conditional U-Net generator and a six-layer PatchGAN discriminator. The conditional U-Net generator produces highly realistic synthetic defective samples and allows control over various characteristics of the generated samples by taking two input images: a segmented defect mask and a clean fabric image. The segmented defect mask specifies the defects to be added to the clean fabric sample, including their type, shape, size, and location. By augmenting the training dataset with diverse, realistic synthetic samples, AI models can learn to identify a broader range of defects more accurately. This technique helps overcome the limitations of small or unvaried datasets, improving defect detection accuracy and generalizability. Moreover, the proposed augmentation method can find applications in other challenging fields, such as generating synthetic samples for medical imaging datasets related to brain and lung tumors.
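
The abstract describes the architecture only at a high level. The sketch below is a minimal PyTorch illustration of that setup, assuming pix2pix-style conventions: the generator conditions on the segmented defect mask and the clean fabric image by concatenating them channel-wise, and the discriminator is a six-layer PatchGAN that scores overlapping patches as real or fake. All layer widths, depths, and channel counts here are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of the setup described in the abstract: a conditional U-Net
# generator taking a segmented defect mask plus a clean fabric image, and a
# 6-layer PatchGAN discriminator. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

def down(in_ch, out_ch):
    # Encoder step: stride-2 convolution halves the spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up(in_ch, out_ch):
    # Decoder step: transposed convolution doubles the spatial resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ConditionalUNetGenerator(nn.Module):
    """U-Net conditioned on a defect mask and a clean fabric image."""
    def __init__(self, mask_ch=1, fabric_ch=3, out_ch=3):
        super().__init__()
        self.d1 = down(mask_ch + fabric_ch, 64)  # condition via channel concat
        self.d2 = down(64, 128)
        self.d3 = down(128, 256)
        self.u1 = up(256, 128)
        self.u2 = up(128 + 128, 64)              # skip connection from d2
        self.u3 = nn.Sequential(
            nn.ConvTranspose2d(64 + 64, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),                           # defective image in [-1, 1]
        )

    def forward(self, mask, clean):
        x = torch.cat([mask, clean], dim=1)
        e1 = self.d1(x)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        y = self.u1(e3)
        y = self.u2(torch.cat([y, e2], dim=1))
        return self.u3(torch.cat([y, e1], dim=1))

class PatchGANDiscriminator(nn.Module):
    """Six-layer PatchGAN: classifies overlapping patches as real or fake."""
    def __init__(self, in_ch=3 + 1 + 3):  # defective image + mask + clean image
        super().__init__()
        chans = [64, 128, 256, 512, 512]
        layers, prev = [], in_ch
        for c in chans:
            layers += [nn.Conv2d(prev, c, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            prev = c
        layers += [nn.Conv2d(prev, 1, 4, stride=1, padding=1)]  # patch logits
        self.net = nn.Sequential(*layers)

    def forward(self, image, mask, clean):
        return self.net(torch.cat([image, mask, clean], dim=1))

if __name__ == "__main__":
    G = ConditionalUNetGenerator()
    D = PatchGANDiscriminator()
    mask = torch.randn(1, 1, 256, 256)   # segmented defect mask
    clean = torch.randn(1, 3, 256, 256)  # clean fabric sample
    fake = G(mask, clean)                # synthetic defective fabric
    print(fake.shape, D(fake, mask, clean).shape)
```

Concatenating the mask and the clean image at the generator input is the simplest way to realize the two-image conditioning the abstract describes: the mask dictates the defect's type, shape, size, and location, while the clean sample fixes the background fabric texture.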
