MoE-NuSeg: Enhancing nuclei segmentation in histology images with a two-stage Mixture of Experts network

Impact Factor: 6.2 | CAS Region 2 (Engineering & Technology) | JCR Q1, Engineering, Multidisciplinary
DOI: 10.1016/j.aej.2024.10.011
Journal: Alexandria Engineering Journal
Publication date: 2024-10-16
Article URL: https://www.sciencedirect.com/science/article/pii/S1110016824011669
Citations: 0

Abstract

Accurate nuclei segmentation is essential for extracting quantitative information from histology images to support disease diagnosis and treatment decisions. However, precise segmentation is challenging due to the presence of clustered nuclei, varied morphologies, and the need to capture global spatial correlations. While state-of-the-art Transformer-based models employ tri-decoder architectures to decouple the task into nuclei, edge, and cluster-edge segmentation, their complexity and long inference times hinder clinical integration. To address this, we introduce MoE-NuSeg, a novel Mixture of Experts (MoE) network that consolidates the tri-decoder into a single decoder. MoE-NuSeg employs three specialized experts for nuclei segmentation, edge delineation, and cluster-edge detection, mirroring the functionality of a tri-decoder while surpassing its performance and reducing parameters by sharing attention heads. We propose a two-stage training strategy: the first stage trains the three experts independently, and the second stage fine-tunes their interactions, dynamically allocating each expert's contribution via a learnable attention-based gating network. Evaluations across three datasets demonstrate that MoE-NuSeg outperforms state-of-the-art methods, achieving average gains of 0.99% in Dice coefficient, 1.14% in IoU, and 0.92% in F1 score, while reducing parameters by 30.1% and FLOPs by 40.2%. The code is available at https://github.com/deep-geo/MoE-NuSeg.
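The abstract's core architectural idea is a single decoder in which three task-specific experts (nuclei, edges, cluster edges) reuse shared attention heads and are fused by a learnable attention-based gating network. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation; the module names, tensor shapes, and exact gating formulation are assumptions (the released code is at https://github.com/deep-geo/MoE-NuSeg).

```python
# Minimal sketch (assumptions, not the paper's code): one shared attention block
# feeding three expert heads, combined per token by a learnable softmax gate.
import torch
import torch.nn as nn


class SharedAttentionMoEHead(nn.Module):
    """Three segmentation experts over one shared self-attention block,
    fused by a per-token gating network (hypothetical layout)."""

    def __init__(self, dim: int = 96, num_heads: int = 4, num_experts: int = 3):
        super().__init__()
        # Shared attention: computed once, reused by every expert.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # One lightweight expert per task: nuclei, edges, cluster edges.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 1))
            for _ in range(num_experts)
        )
        # Attention-based gating: per-token weights over the experts.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, num_patches, dim) from the Transformer encoder.
        x = self.norm(tokens)
        attn_out, _ = self.attn(x, x, x)                      # shared by all experts
        expert_maps = torch.cat([e(attn_out) for e in self.experts], dim=-1)  # (B, N, 3)
        gate_weights = torch.softmax(self.gate(attn_out), dim=-1)             # (B, N, 3)
        fused = (expert_maps * gate_weights).sum(dim=-1, keepdim=True)        # (B, N, 1)
        return fused, expert_maps, gate_weights


if __name__ == "__main__":
    head = SharedAttentionMoEHead()
    feats = torch.randn(2, 196, 96)          # dummy encoder tokens
    fused, per_expert, gates = head(feats)
    print(fused.shape, per_expert.shape, gates.shape)
```

Under the two-stage strategy described above, one would first train the three experts on their respective targets and then fine-tune the gating network and expert interactions end to end; the sketch therefore exposes the per-expert logits alongside the fused output so both stages can be supervised.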
Source journal

Alexandria Engineering Journal (Engineering: General Engineering)
CiteScore: 11.20
Self-citation rate: 4.40%
Articles published: 1015
Review time: 43 days
Journal introduction: Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the field of engineering and applied science. Alexandria Engineering Journal is cited in the Engineering Information Services (EIS) and the Chemical Abstracts (CA). The papers published in Alexandria Engineering Journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering