AUSAM: Adaptive Unified Segmentation Anything Model for multi-modality tumor segmentation and enhanced detection in medical imaging

IF 7.2, CAS Tier 1 (Computer Science), JCR Q1: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Suraj Sood, Saeed Alqarni, Syed Jawad Hussain Shah, Yugyung Lee
{"title":"AUSAM:用于医学影像中多模态肿瘤分割和增强检测的自适应统一分割模型","authors":"Suraj Sood ,&nbsp;Saeed Alqarni ,&nbsp;Syed Jawad Hussain Shah,&nbsp;Yugyung Lee","doi":"10.1016/j.knosys.2025.113588","DOIUrl":null,"url":null,"abstract":"<div><div>Tumor segmentation in medical imaging is critical for diagnosis, treatment planning, and prognosis, yet remains challenging due to limited annotated data, tumor heterogeneity, and modality-specific complexities in CT, MRI, and histopathology. Although the <em>Segment Anything Model (SAM)</em> shows promise as a zero-shot learner, it struggles with irregular tumor boundaries and domain-specific variations. We introduce the <em>Adaptive Unified Segmentation Anything Model (AUSAM)</em>. This novel framework extends SAM’s capabilities for multi-modal tumor segmentation by integrating an intelligent prompt module, dynamic sampling, and stage-based thresholding. Specifically, clustering-based prompt learning (DBSCAN for CT/MRI and K-means for histopathology) adaptively allocates prompts to capture challenging tumor regions, while entropy-guided sampling and dynamic thresholding systematically reduce annotation requirements and computational overhead. Validated on diverse benchmarks—LiTS (CT), FLARE 2023 (CT/MRI), ORCA, and OCDC (histopathology)—AUSAM achieves state-of-the-art Dice Similarity Coefficients (DSC) of 94.25%, 91.84%, 87.59%, and 91.84%, respectively, with significantly reduced data usage. As the first framework to adapt SAM for multi-modal tumor segmentation, AUSAM sets a new standard for precision, scalability, and efficiency. It is offered in two variants: <em>AUSAM-Lite</em> for resource-constrained environments and <em>AUSAM-Max</em> for maximum segmentation accuracy, thereby advancing medical imaging and clinical decision-making.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"319 ","pages":"Article 113588"},"PeriodicalIF":7.2000,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AUSAM: Adaptive Unified Segmentation Anything Model for multi-modality tumor segmentation and enhanced detection in medical imaging\",\"authors\":\"Suraj Sood ,&nbsp;Saeed Alqarni ,&nbsp;Syed Jawad Hussain Shah,&nbsp;Yugyung Lee\",\"doi\":\"10.1016/j.knosys.2025.113588\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Tumor segmentation in medical imaging is critical for diagnosis, treatment planning, and prognosis, yet remains challenging due to limited annotated data, tumor heterogeneity, and modality-specific complexities in CT, MRI, and histopathology. Although the <em>Segment Anything Model (SAM)</em> shows promise as a zero-shot learner, it struggles with irregular tumor boundaries and domain-specific variations. We introduce the <em>Adaptive Unified Segmentation Anything Model (AUSAM)</em>. This novel framework extends SAM’s capabilities for multi-modal tumor segmentation by integrating an intelligent prompt module, dynamic sampling, and stage-based thresholding. Specifically, clustering-based prompt learning (DBSCAN for CT/MRI and K-means for histopathology) adaptively allocates prompts to capture challenging tumor regions, while entropy-guided sampling and dynamic thresholding systematically reduce annotation requirements and computational overhead. 
Validated on diverse benchmarks—LiTS (CT), FLARE 2023 (CT/MRI), ORCA, and OCDC (histopathology)—AUSAM achieves state-of-the-art Dice Similarity Coefficients (DSC) of 94.25%, 91.84%, 87.59%, and 91.84%, respectively, with significantly reduced data usage. As the first framework to adapt SAM for multi-modal tumor segmentation, AUSAM sets a new standard for precision, scalability, and efficiency. It is offered in two variants: <em>AUSAM-Lite</em> for resource-constrained environments and <em>AUSAM-Max</em> for maximum segmentation accuracy, thereby advancing medical imaging and clinical decision-making.</div></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":\"319 \",\"pages\":\"Article 113588\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2025-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950705125006343\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125006343","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Cited by: 0

Abstract

Tumor segmentation in medical imaging is critical for diagnosis, treatment planning, and prognosis, yet remains challenging due to limited annotated data, tumor heterogeneity, and modality-specific complexities in CT, MRI, and histopathology. Although the Segment Anything Model (SAM) shows promise as a zero-shot learner, it struggles with irregular tumor boundaries and domain-specific variations. We introduce the Adaptive Unified Segmentation Anything Model (AUSAM). This novel framework extends SAM’s capabilities for multi-modal tumor segmentation by integrating an intelligent prompt module, dynamic sampling, and stage-based thresholding. Specifically, clustering-based prompt learning (DBSCAN for CT/MRI and K-means for histopathology) adaptively allocates prompts to capture challenging tumor regions, while entropy-guided sampling and dynamic thresholding systematically reduce annotation requirements and computational overhead. Validated on diverse benchmarks—LiTS (CT), FLARE 2023 (CT/MRI), ORCA, and OCDC (histopathology)—AUSAM achieves state-of-the-art Dice Similarity Coefficients (DSC) of 94.25%, 91.84%, 87.59%, and 91.84%, respectively, with significantly reduced data usage. As the first framework to adapt SAM for multi-modal tumor segmentation, AUSAM sets a new standard for precision, scalability, and efficiency. It is offered in two variants: AUSAM-Lite for resource-constrained environments and AUSAM-Max for maximum segmentation accuracy, thereby advancing medical imaging and clinical decision-making.
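The abstract names three mechanisms without giving code: clustering-based prompt learning (DBSCAN for CT/MRI, K-means for histopathology), entropy-guided sampling, and the Dice Similarity Coefficient (DSC) used for evaluation. The following is a minimal, hypothetical Python sketch of those ideas, not the authors' implementation; the function names (generate_point_prompts, entropy_rank, dice), the 0.5 foreground threshold, and the clustering parameters are illustrative assumptions.

import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def generate_point_prompts(mask_probs, modality="ct_mri", eps=3.0, min_samples=10, k=5):
    # Turn coarse per-pixel tumor probabilities into point prompts for SAM.
    # "ct_mri" uses DBSCAN (irregular, density-varying lesions);
    # "histopathology" uses K-means (more uniformly distributed regions).
    ys, xs = np.nonzero(mask_probs > 0.5)              # candidate foreground pixels (assumed threshold)
    coords = np.stack([xs, ys], axis=1).astype(float)
    if len(coords) == 0:
        return np.empty((0, 2))
    if modality == "ct_mri":
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    else:
        labels = KMeans(n_clusters=min(k, len(coords)), n_init=10).fit_predict(coords)
    prompts = []
    for lab in np.unique(labels):
        if lab == -1:                                   # skip DBSCAN noise points
            continue
        prompts.append(coords[labels == lab].mean(axis=0))  # one point prompt per cluster
    return np.asarray(prompts) if prompts else np.empty((0, 2))

def entropy_rank(prob_volume):
    # Rank slices of a (slices, H, W) probability volume by mean pixel-wise
    # binary entropy, so the most uncertain slices are annotated/refined first.
    p = np.clip(prob_volume, 1e-6, 1 - 1e-6)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return np.argsort(entropy.mean(axis=(1, 2)))[::-1]

def dice(pred, target, eps=1e-6):
    # Dice Similarity Coefficient (DSC) between two binary masks,
    # the metric reported in the abstract (e.g., 94.25% on LiTS).
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

In AUSAM the generated points would presumably feed SAM's prompt encoder; this sketch stops at producing the coordinates and the slice ranking.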
Source journal: Knowledge-Based Systems (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles published: 1245
Review time: 7.8 months
Journal introduction: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.