UN-SAM: Domain-adaptive self-prompt segmentation for universal nuclei images

IF 10.7 | Medicine, Region 1 | Q1 Computer Science, Artificial Intelligence
Zhen Chen, Qing Xu, Xinyu Liu, Yixuan Yuan
{"title":"联合国核监测团:通用核图像的自适应自提示分割","authors":"Zhen Chen ,&nbsp;Qing Xu ,&nbsp;Xinyu Liu ,&nbsp;Yixuan Yuan","doi":"10.1016/j.media.2025.103607","DOIUrl":null,"url":null,"abstract":"<div><div>In digital pathology, precise nuclei segmentation is pivotal yet challenged by the diversity of tissue types, staining protocols, and imaging conditions. Recently, the segment anything model (SAM) revealed overwhelming performance in natural scenarios and impressive adaptation to medical imaging. Despite these advantages, the reliance on labor-intensive manual annotation as segmentation prompts severely hinders their clinical applicability, especially for nuclei image analysis containing massive cells where dense manual prompts are impractical. To overcome the limitations of current SAM methods while retaining the advantages, we propose the domain-adaptive self-prompt SAM framework for Universal Nuclei segmentation (UN-SAM), by providing a fully automated solution with superior performance across different domains. Specifically, to eliminate the labor-intensive requirement of per-nuclei annotations for prompt, we devise a multi-scale Self-Prompt Generation (SPGen) module to revolutionize clinical workflow by automatically generating high-quality mask hints to guide the segmentation tasks. Moreover, to unleash the capability of SAM across a variety of nuclei images, we devise a Domain-adaptive Tuning Encoder (DT-Encoder) to seamlessly harmonize visual features with domain-common and domain-specific knowledge, and further devise a Domain Query-enhanced Decoder (DQ-Decoder) by leveraging learnable domain queries for segmentation decoding in different nuclei domains. Extensive experiments prove that our UN-SAM surpasses state-of-the-arts in nuclei instance and semantic segmentation, especially the generalization capability on unseen nuclei domains. The source code is available at <span><span>https://github.com/CUHK-AIM-Group/UN-SAM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103607"},"PeriodicalIF":10.7000,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"UN-SAM: Domain-adaptive self-prompt segmentation for universal nuclei images\",\"authors\":\"Zhen Chen ,&nbsp;Qing Xu ,&nbsp;Xinyu Liu ,&nbsp;Yixuan Yuan\",\"doi\":\"10.1016/j.media.2025.103607\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In digital pathology, precise nuclei segmentation is pivotal yet challenged by the diversity of tissue types, staining protocols, and imaging conditions. Recently, the segment anything model (SAM) revealed overwhelming performance in natural scenarios and impressive adaptation to medical imaging. Despite these advantages, the reliance on labor-intensive manual annotation as segmentation prompts severely hinders their clinical applicability, especially for nuclei image analysis containing massive cells where dense manual prompts are impractical. To overcome the limitations of current SAM methods while retaining the advantages, we propose the domain-adaptive self-prompt SAM framework for Universal Nuclei segmentation (UN-SAM), by providing a fully automated solution with superior performance across different domains. 
Specifically, to eliminate the labor-intensive requirement of per-nuclei annotations for prompt, we devise a multi-scale Self-Prompt Generation (SPGen) module to revolutionize clinical workflow by automatically generating high-quality mask hints to guide the segmentation tasks. Moreover, to unleash the capability of SAM across a variety of nuclei images, we devise a Domain-adaptive Tuning Encoder (DT-Encoder) to seamlessly harmonize visual features with domain-common and domain-specific knowledge, and further devise a Domain Query-enhanced Decoder (DQ-Decoder) by leveraging learnable domain queries for segmentation decoding in different nuclei domains. Extensive experiments prove that our UN-SAM surpasses state-of-the-arts in nuclei instance and semantic segmentation, especially the generalization capability on unseen nuclei domains. The source code is available at <span><span>https://github.com/CUHK-AIM-Group/UN-SAM</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":18328,\"journal\":{\"name\":\"Medical image analysis\",\"volume\":\"103 \",\"pages\":\"Article 103607\"},\"PeriodicalIF\":10.7000,\"publicationDate\":\"2025-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical image analysis\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1361841525001549\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical image analysis","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1361841525001549","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In digital pathology, precise nuclei segmentation is pivotal yet challenged by the diversity of tissue types, staining protocols, and imaging conditions. Recently, the segment anything model (SAM) revealed overwhelming performance in natural scenarios and impressive adaptation to medical imaging. Despite these advantages, the reliance on labor-intensive manual annotation as segmentation prompts severely hinders their clinical applicability, especially for nuclei image analysis containing massive cells where dense manual prompts are impractical. To overcome the limitations of current SAM methods while retaining the advantages, we propose the domain-adaptive self-prompt SAM framework for Universal Nuclei segmentation (UN-SAM), by providing a fully automated solution with superior performance across different domains. Specifically, to eliminate the labor-intensive requirement of per-nuclei annotations for prompt, we devise a multi-scale Self-Prompt Generation (SPGen) module to revolutionize clinical workflow by automatically generating high-quality mask hints to guide the segmentation tasks. Moreover, to unleash the capability of SAM across a variety of nuclei images, we devise a Domain-adaptive Tuning Encoder (DT-Encoder) to seamlessly harmonize visual features with domain-common and domain-specific knowledge, and further devise a Domain Query-enhanced Decoder (DQ-Decoder) by leveraging learnable domain queries for segmentation decoding in different nuclei domains. Extensive experiments prove that our UN-SAM surpasses state-of-the-arts in nuclei instance and semantic segmentation, especially the generalization capability on unseen nuclei domains. The source code is available at https://github.com/CUHK-AIM-Group/UN-SAM.
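
The abstract names three components (SPGen, DT-Encoder, DQ-Decoder) but does not spell out their internals. The PyTorch sketch below is a minimal illustration of how such a self-prompt, domain-adaptive SAM pipeline could be wired together. Every module body, layer choice, and tensor shape here is an assumption made for illustration, not the authors' implementation; see the linked repository for the real code.

```python
# Hypothetical sketch of the UN-SAM pipeline shape. All internals (1x1
# heads, adapter layout, single-query attention) are assumptions for
# illustration; the actual implementation is in the linked repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SPGen(nn.Module):
    """Multi-scale Self-Prompt Generation: predicts a coarse foreground
    mask hint from encoder features, standing in for manual prompts."""

    def __init__(self, channels=(256,)):
        super().__init__()
        self.heads = nn.ModuleList(nn.Conv2d(c, 1, kernel_size=1) for c in channels)

    def forward(self, feats):
        size = feats[0].shape[-2:]
        logits = [F.interpolate(head(f), size=size, mode="bilinear", align_corners=False)
                  for head, f in zip(self.heads, feats)]
        # Average the per-scale logits into one soft mask hint in [0, 1].
        return torch.sigmoid(torch.stack(logits).mean(dim=0))


class DTEncoder(nn.Module):
    """Domain-adaptive Tuning Encoder: frozen SAM-style backbone plus a
    shared (domain-common) adapter and one adapter per nuclei domain."""

    def __init__(self, backbone, dim=256, num_domains=4):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # tune only the lightweight adapters
        self.common = nn.Conv2d(dim, dim, kernel_size=1)
        self.specific = nn.ModuleList(
            nn.Conv2d(dim, dim, kernel_size=1) for _ in range(num_domains))

    def forward(self, x, domain_id):
        f = self.backbone(x)
        return f + self.common(f) + self.specific[domain_id](f)


class DQDecoder(nn.Module):
    """Domain Query-enhanced Decoder: a learnable query per domain attends
    over the hint-gated features before a 1x1 mask head."""

    def __init__(self, dim=256, num_domains=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_domains, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.mask_head = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, feat, hint, domain_id):
        b, c = feat.shape[:2]
        feat = feat * hint                          # gate features with the self-prompt
        tokens = feat.flatten(2).transpose(1, 2)    # (B, H*W, C)
        query = self.queries[domain_id].expand(b, 1, -1)
        ctx, _ = self.attn(query, tokens, tokens)   # per-domain context vector
        feat = feat + ctx.transpose(1, 2).reshape(b, c, 1, 1)
        return self.mask_head(feat)                 # segmentation logits


# Toy forward pass; the conv stands in for SAM's ViT image encoder.
backbone = nn.Conv2d(3, 256, kernel_size=16, stride=16)
encoder, spgen, decoder = DTEncoder(backbone), SPGen(), DQDecoder()
image = torch.randn(2, 3, 256, 256)
feat = encoder(image, domain_id=1)      # (2, 256, 16, 16)
hint = spgen([feat])                    # (2, 1, 16, 16) self-generated prompt
mask_logits = decoder(feat, hint, 1)    # (2, 1, 16, 16); upsample for full-size masks
```

In this sketch only the adapters, SPGen heads, domain queries, and mask head would be trained, with the backbone frozen; that is a common parameter-efficient pattern for adapting SAM, though whether UN-SAM freezes its backbone this way is an assumption here.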
Source journal
Medical Image Analysis (Engineering & Technology: Biomedical Engineering)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles published: 309
Review time: 6.6 months
About the journal: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.