Rethinking hard training sample generation for medical image segmentation

IF 7.6 · Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zhibin Wan , Zhiqiang Gao , Mingjie Sun , Yang Yang , Cao Min , Hongliang He , Guohong Fu
DOI: 10.1016/j.patcog.2025.112533
Journal: Pattern Recognition, Volume 172, Article 112533
Published: 2025-10-04 (Journal Article)
Citations: 0

Abstract

This paper tackles the task of synthetic data generation for downstream segmentation tasks, especially in data-scarce fields like medical diagnostics. Previous methods address the challenge of similar synthetic samples leading to model saturation by leveraging a specific downstream model to guide the generation process and dynamically adjusting sample difficulty to prevent downstream performance plateaus. However, such an approach does not consider the transferability of these synthetic samples: because different downstream models focus on different features, the samples may not be universally challenging. Thus, we propose a strategy that uses the discrepancy between backbone-extracted features and real-image prototypes to generate challenging samples, employing two loss functions: one for key-area diversity and another for overall image fidelity. This ensures key areas are challenging while the background remains stable, yielding samples that are broadly applicable to downstream tasks without overfitting to specific models. A model trained on data generated by our approach achieves a mean Intersection over Union (mIoU) of 86.84% averaged across five polyp test datasets, surpassing the state-of-the-art (SOTA) model CTNet [1] by a significant margin of 6.14%. Code is available at https://github.com/Bbinzz/Rethinking-Hard-Training-Sample-Generation-for-Medical-Image-Segmentation.
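The headline metric above, mIoU, averages the per-class Intersection over Union between a predicted mask and its ground truth. A minimal NumPy sketch of that computation, using toy 4×4 binary masks (the `miou` helper is illustrative only and is not the paper's evaluation code, which may handle class absence or averaging differently):

```python
import numpy as np

def miou(pred, gt, num_classes=2):
    """Mean Intersection over Union across classes.

    Classes absent from both masks are skipped so they do not
    distort the average.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        g = gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class appears in neither mask
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 binary masks (0 = background, 1 = lesion/polyp).
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 0, 0, 1],
                 [0, 0, 1, 1],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])

print(round(miou(pred, gt), 4))  # → 0.7231
```

In a multi-dataset evaluation like the one reported, this per-image score would be averaged over each test set and then across the five datasets.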
Source journal: Pattern Recognition (Engineering & Technology, Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 16.20%
Articles per year: 683
Review time: 5.6 months
About the journal: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.