Streamlining the annotation process by radiologists of volumetric medical images with few-shot learning.

IF 2.3 · JCR Q3 (Engineering, Biomedical) · CAS Tier 3 (Medicine)
Alina Ryabtsev, Richard Lederman, Jacob Sosna, Leo Joskowicz
DOI: 10.1007/s11548-025-03457-3
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 1863-1873
Published: 2025-09-01 (Epub 2025-06-25)
Full text (PMC): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476431/pdf/
Citations: 0

Abstract

Purpose: Radiologists' manual annotation effort limits the development of robust deep learning for volumetric medical imaging. While supervised methods excel when large annotated datasets are available, few-shot learning performs well for large structures but struggles with small ones, such as lesions. This paper describes a novel method that leverages the advantages of both few-shot learning models and fully supervised models while reducing the cost of manual annotation.

Methods: Our method inputs a small dataset of labeled scans and a large dataset of unlabeled scans and outputs a validated labeled dataset used to train a supervised model (nnU-Net). The estimated correction effort is reduced by having the radiologist correct a subset of the scan labels computed by a few-shot learning model (UniverSeg). The method uses an optimized support set of scan slice patches and prioritizes the resulting labeled scans that require the least correction. This process is repeated for the remaining unannotated scans until satisfactory performance is obtained.
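As a rough illustration, the iterative loop described in the Methods can be sketched as below. All names here (`few_shot_label`, `estimate_correction_effort`, the difficulty-based stand-ins) are hypothetical placeholders, not the authors' actual pipeline or the UniverSeg/nnU-Net APIs; the few-shot model, the radiologist correction step, and the support-set optimization are reduced to comments.

```python
# Hypothetical sketch of the iterative annotation loop: label unlabeled scans
# with a few-shot model, prioritize the scans needing the least correction,
# have those corrected, and fold them into the labeled pool.

def few_shot_label(scan, support_set):
    # Stand-in for UniverSeg-style few-shot segmentation; here the predicted
    # label's error is simply proportional to an assumed scan "difficulty".
    return {"scan": scan, "error": scan["difficulty"]}

def estimate_correction_effort(label):
    # Proxy for the estimated radiologist correction effort of one label.
    return label["error"]

def annotate_iteratively(labeled, unlabeled, budget_per_round=2, rounds=3):
    """Repeat: label the unlabeled scans, accept the subset requiring the
    least correction, and move it into the labeled pool."""
    for _ in range(rounds):
        if not unlabeled:
            break
        support_set = labeled  # support-set optimization omitted in this sketch
        candidates = [few_shot_label(s, support_set) for s in unlabeled]
        candidates.sort(key=estimate_correction_effort)  # least effort first
        for lab in candidates[:budget_per_round]:
            unlabeled.remove(lab["scan"])
            labeled.append(lab["scan"])  # radiologist correction happens here
    return labeled, unlabeled
```

In the paper, the accepted labels would be corrected by the radiologist and the support set of scan slice patches re-optimized before the next round; this sketch only shows the prioritize-and-accept control flow.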

Results: We validated our method on liver, lung, and brain lesions in CT and MRI scans (375 scans, 5933 lesions). Compared with manual annotation from scratch, it significantly reduces the estimated lesion detection correction effort: 34% for missed lesions and 387% for wrongly identified lesions, with 130% fewer lesion contour corrections and 424% fewer pixels to correct in the lesion contours.

Conclusion: Our method effectively reduces the radiologist's annotation effort for small structures, producing high-quality annotated datasets sufficient to train deep learning models. The method is generic and can be applied to a variety of lesions in various organs imaged with different modalities.


Source Journal
International Journal of Computer Assisted Radiology and Surgery
Categories: Engineering, Biomedical; Radiology, Nuclear Medicine & Medical Imaging
CiteScore: 5.90
Self-citation rate: 6.70%
Articles per year: 243
Review time: 6-12 weeks
期刊介绍: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.