Incremental Learning for Defect Segmentation With Efficient Transformer Semantic Complement

Impact Factor: 8.9 | CAS Zone 1, Computer Science | JCR Q1, Computer Science, Artificial Intelligence
Xiqi Li, Zhifu Huang, Ge Ma, Yu Liu
DOI: 10.1109/TNNLS.2025.3604956
Journal: IEEE Transactions on Neural Networks and Learning Systems
Published: 2025-09-09 (Journal Article)
Citations: 0

Abstract

In industrial scenarios, semantic segmentation of surface defects is vital for identifying, localizing, and delineating defects. However, new defect types constantly emerge with product iterations or process updates. Existing defect segmentation models lack incremental learning capabilities, and direct fine-tuning (FT) often leads to catastrophic forgetting. Furthermore, low contrast between defects and background, as well as among defect classes, exacerbates this issue. To address these challenges, we introduce a plug-and-play Transformer-based semantic complement module (TSCM). With only a few added parameters, it injects global contextual features from multi-head self-attention into shallow convolutional neural network (CNN) feature maps, compensating for convolutional receptive-field limits and fusing global and local information for better segmentation. For incremental updates, we propose multi-scale spatial pooling distillation (MSPD), which uses pseudo-labeling and multi-scale pooling to preserve both short- and long-range spatial relations and provides smooth feature alignment between teacher and student. Additionally, we adopt an adaptive weight fusion (AWF) strategy with a dynamic threshold that assigns higher weights to parameters with larger updates, achieving an optimal balance between stability and plasticity. The experimental results on two industrial surface defect datasets demonstrate that our method outperforms existing approaches in various incremental segmentation scenarios.
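The multi-scale spatial pooling distillation (MSPD) described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy version, not the paper's exact formulation: the scale set `(1, 2, 4)`, the use of plain average pooling, and the L2 distance between pooled teacher and student features are all placeholders chosen for illustration.

```python
import numpy as np

def avg_pool2d(x, k):
    """Average-pool a (C, H, W) feature map with a k x k window and stride k."""
    c, h, w = x.shape
    hk, wk = h // k, w // k
    x = x[:, : hk * k, : wk * k]           # crop so H and W divide evenly by k
    return x.reshape(c, hk, k, wk, k).mean(axis=(2, 4))

def mspd_loss(teacher, student, scales=(1, 2, 4)):
    """Toy multi-scale spatial pooling distillation loss.

    Pools teacher and student feature maps at several window sizes and
    averages the squared differences: small windows preserve short-range
    spatial relations, large windows preserve long-range ones.
    """
    loss = 0.0
    for k in scales:
        t = avg_pool2d(teacher, k)
        s = avg_pool2d(student, k)
        loss += np.mean((t - s) ** 2)
    return loss / len(scales)
```

Identical teacher and student features give zero loss at every scale, while any spatial drift in the student is penalized at the scale where it appears.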
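The adaptive weight fusion (AWF) strategy from the abstract can also be sketched in NumPy. Everything concrete here is a hypothetical stand-in: the mixing weights `0.8`/`0.2`, the mean-based dynamic threshold, and the floor `tau` are illustrative choices, not the values used in the paper. The sketch only shows the stated idea that parameters with larger updates receive a higher weight on the new (plastic) value, while the rest stay close to the old (stable) value.

```python
import numpy as np

def adaptive_weight_fusion(old_params, new_params, tau=0.01):
    """Toy adaptive weight fusion of old and fine-tuned parameters.

    A dynamic threshold derived from the update statistics splits
    parameters into large-update ones (weighted toward the new values,
    plasticity) and small-update ones (weighted toward the old values,
    stability).
    """
    delta = np.abs(new_params - old_params)
    thresh = max(tau, float(delta.mean()))        # dynamic threshold from updates
    alpha = np.where(delta > thresh, 0.8, 0.2)    # larger update -> more new value
    return alpha * new_params + (1 - alpha) * old_params
```

For example, fusing zeros with `[0.0, 0.01, 1.0, 2.0]` keeps the two barely changed parameters near their old values while moving the two strongly updated ones most of the way to the new ones.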
Source journal: IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80 | Self-citation rate: 9.60% | Articles per year: 2102 | Review time: 3-8 weeks
About the journal: IEEE Transactions on Neural Networks and Learning Systems publishes scholarly articles on the theory, design, and applications of neural networks and other learning systems, with an emphasis on technical and scientific research in this domain.