A unified approach to medical image segmentation by leveraging mixed supervision and self and transfer learning (MIST)

IF 4.9 · CAS Medicine Zone 2 · Q1 (ENGINEERING, BIOMEDICAL)
Jianfei Liu, Sayantan Bhadra, Omid Shafaat, Pritam Mukherjee, Christopher Parnell, Ronald M. Summers
{"title":"A unified approach to medical image segmentation by leveraging mixed supervision and self and transfer learning (MIST)","authors":"Jianfei Liu ,&nbsp;Sayantan Bhadra ,&nbsp;Omid Shafaat ,&nbsp;Pritam Mukherjee ,&nbsp;Christopher Parnell ,&nbsp;Ronald M. Summers","doi":"10.1016/j.compmedimag.2025.102517","DOIUrl":null,"url":null,"abstract":"<div><div>Medical image segmentation is important for quantitative disease diagnosis and treatment but relies on accurate pixel-wise labels, which are costly, time-consuming, and require domain expertise. This work introduces MIST (MIxed supervision, Self, and Transfer learning) to reduce manual labeling in medical image segmentation. A small set of cases was manually annotated (“strong labels”), while the rest used automated, less accurate labels (“weak labels”). Both label types trained a dual-branch network with a shared encoder and two decoders. Self-training iteratively refined weak labels, and transfer learning reduced computational costs by freezing the encoder and fine-tuning the decoders. Applied to segmenting muscle, subcutaneous, and visceral adipose tissue, MIST used only 100 manually labeled slices from 20 CT scans to generate accurate labels for all slices of 102 internal scans, which were then used to train a 3D nnU-Net model. Using MIST to update weak labels significantly improved nnU-Net segmentation accuracy compared to training directly on strong and weak labels. Dice similarity coefficient (DSC) increased for muscle (89.2 ± 4.3% to 93.2 ± 2.1%), subcutaneous (75.1 ± 14.4% to 94.2 ± 2.8%), and visceral adipose tissue (66.6 ± 16.4% to 77.1 ± 19.0% ) on an internal dataset (<span><math><mrow><mi>p</mi><mo>&lt;</mo><mo>.</mo><mn>05</mn></mrow></math></span>). DSC improved for muscle (80.5 ± 6.9% to 86.6 ± 3.9%) and subcutaneous adipose tissue (61.8 ± 12.5% to 82.7 ± 11.1%) on an external dataset (<span><math><mrow><mi>p</mi><mo>&lt;</mo><mo>.</mo><mn>05</mn></mrow></math></span>). MIST reduced the annotation burden by 99%, enabling efficient, accurate pixel-wise labeling for medical image segmentation. Code is available at <span><span>https://github.com/rsummers11/NIH_CADLab_Body_Composition</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"122 ","pages":"Article 102517"},"PeriodicalIF":4.9000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0895611125000266","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Medical image segmentation is important for quantitative disease diagnosis and treatment but relies on accurate pixel-wise labels, which are costly, time-consuming, and require domain expertise. This work introduces MIST (MIxed supervision, Self, and Transfer learning) to reduce manual labeling in medical image segmentation. A small set of cases was manually annotated (“strong labels”), while the rest used automated, less accurate labels (“weak labels”). Both label types trained a dual-branch network with a shared encoder and two decoders. Self-training iteratively refined weak labels, and transfer learning reduced computational costs by freezing the encoder and fine-tuning the decoders. Applied to segmenting muscle, subcutaneous, and visceral adipose tissue, MIST used only 100 manually labeled slices from 20 CT scans to generate accurate labels for all slices of 102 internal scans, which were then used to train a 3D nnU-Net model. Using MIST to update weak labels significantly improved nnU-Net segmentation accuracy compared to training directly on strong and weak labels. Dice similarity coefficient (DSC) increased for muscle (89.2 ± 4.3% to 93.2 ± 2.1%), subcutaneous (75.1 ± 14.4% to 94.2 ± 2.8%), and visceral adipose tissue (66.6 ± 16.4% to 77.1 ± 19.0%) on an internal dataset (p < .05). DSC improved for muscle (80.5 ± 6.9% to 86.6 ± 3.9%) and subcutaneous adipose tissue (61.8 ± 12.5% to 82.7 ± 11.1%) on an external dataset (p < .05). MIST reduced the annotation burden by 99%, enabling efficient, accurate pixel-wise labeling for medical image segmentation. Code is available at https://github.com/rsummers11/NIH_CADLab_Body_Composition.
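The abstract describes two concrete mechanisms: a dual-branch network in which one shared encoder feeds two decoders (one supervised by strong labels, one by weak labels), and a transfer-learning step that freezes the encoder and fine-tunes only the decoders. The authors' actual implementation is in the linked repository; the sketch below is only a minimal PyTorch illustration of those two ideas, with all class names and layer sizes chosen here as assumptions rather than taken from the paper.

```python
# Minimal sketch (not the authors' code): a dual-branch segmentation
# network with one shared encoder and two structurally identical
# decoders, plus the freeze-encoder / fine-tune-decoders step.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DualBranchSegNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=4):
        super().__init__()
        # Shared encoder (downsampling path); depth is illustrative.
        self.encoder = nn.Sequential(
            conv_block(in_channels, 32),
            nn.MaxPool2d(2),
            conv_block(32, 64),
        )
        # Two decoders with the same structure but separate weights.
        def make_decoder():
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2),
                conv_block(32, 32),
                nn.Conv2d(32, num_classes, kernel_size=1),
            )
        self.strong_decoder = make_decoder()  # trained on strong (manual) labels
        self.weak_decoder = make_decoder()    # trained on weak (automated) labels

    def forward(self, x):
        feats = self.encoder(x)
        return self.strong_decoder(feats), self.weak_decoder(feats)

# Transfer-learning step from the abstract: freeze the shared encoder
# and optimize only the decoder parameters.
def freeze_encoder(model: DualBranchSegNet):
    for p in model.encoder.parameters():
        p.requires_grad = False

model = DualBranchSegNet()
freeze_encoder(model)
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
```

In this layout, self-training would alternate between predicting refined weak labels with the weak branch and retraining on them; that loop is omitted here because the abstract does not specify its schedule.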
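The accuracy metric reported throughout is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A small self-contained sketch (the helper name is hypothetical):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|),
    for binary masks of one tissue class (e.g., muscle)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A DSC of 0.932 from this function corresponds to the 93.2% reported for muscle on the internal dataset.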
Source journal
CiteScore: 10.70
Self-citation rate: 3.50%
Articles per year: 71
Review time: 26 days
About the journal: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.