MODD: Multicenter one-shot medical landmark detection and denoising.

IF 3.4 · CAS Zone 3 (Medicine) · JCR Q2 (Engineering, Biomedical)
Jialin Shi, Xiangde Li, Ning Zhang, Zongjie Wang
{"title":"MODD: Multicenter one-shot medical landmark detection and denoising.","authors":"Jialin Shi, Xiangde Li, Ning Zhang, Zongjie Wang","doi":"10.1088/1361-6560/ae0d40","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>One-shot landmark detection on lateral cephalometric X-ray images has advantages for label-efficient training. As clinical datasets are typically small and do not generalize well to variations in equipment or imaging environments. When using datasets from multiple studies (multi-center data) for joint training, the imbalance in the number of images and the differences in image properties can lead to poor generalization performance. In this work, we aim to propose a method that leverages more data from multiple experiments to improve the accuracy of a single experiment.&#xD;Approach. To address these challenges, we propose a Multicenter One-shot landmark Detection and Denoising framework (MODD). It incorporates a self-supervised one-shot mapping based on multicenter template transformation and the pseudo-label denoising module. Label denoising is used to reduce the impact of inaccurate pseudo-labels on the algorithm, focusing on label quality rather than signal noise in X-ray images.&#xD;For denoising module, we propose the shuffled dynamic sample selection and contrastive correction of multicenter pseudo labels. These two components together enable more accurate one-shot landmarks detection on lateral cephalometric X-ray images in multicenter scenarios.&#xD;Main results. Experiments are conducted with the publicly available multicenter cephalometric X-ray datasets. MODD achieves a landmark detection accuracy of 79.27\\% within a 4.0 mm range and the mean radial error of 2.94 mm, demonstrating satisfactory performance compared to state-of-the-art methods. &#xD;Significance. This study expands the application of medical landmark detection to the multi-center one-shot filed and demonstrates the potential of the MODD architecture.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physics in medicine and biology","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1088/1361-6560/ae0d40","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Objective: One-shot landmark detection on lateral cephalometric X-ray images offers label-efficient training, but clinical datasets are typically small and do not generalize well to variations in equipment or imaging environments. When datasets from multiple studies (multi-center data) are used for joint training, imbalance in the number of images and differences in image properties can lead to poor generalization. In this work, we aim to propose a method that leverages additional data from multiple experiments to improve the accuracy of a single experiment.

Approach: To address these challenges, we propose a Multicenter One-shot landmark Detection and Denoising framework (MODD). It incorporates self-supervised one-shot mapping based on multicenter template transformation and a pseudo-label denoising module. Label denoising is used to reduce the impact of inaccurate pseudo-labels on the algorithm, focusing on label quality rather than signal noise in the X-ray images. For the denoising module, we propose shuffled dynamic sample selection and contrastive correction of multicenter pseudo-labels. Together, these components enable more accurate one-shot landmark detection on lateral cephalometric X-ray images in multicenter scenarios.

Main results: Experiments are conducted on publicly available multicenter cephalometric X-ray datasets. MODD achieves a landmark detection accuracy of 79.27% within a 4.0 mm range and a mean radial error of 2.94 mm, demonstrating satisfactory performance compared with state-of-the-art methods.

Significance: This study expands the application of medical landmark detection to the multi-center one-shot field and demonstrates the potential of the MODD architecture.
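The abstract reports two standard cephalometric evaluation metrics: mean radial error (MRE, in mm) and the detection rate within a 4.0 mm threshold (often called SDR@4 mm). The sketch below shows how such metrics are typically computed; the array shapes, the 0.1 mm pixel spacing, and the function name `evaluate_landmarks` are illustrative assumptions and are not taken from the MODD implementation.

```python
# Minimal sketch of MRE and SDR@4mm for landmark detection.
# Shapes, spacing, and names are assumptions for illustration only.
import numpy as np


def evaluate_landmarks(pred_px, gt_px, spacing_mm=0.1, threshold_mm=4.0):
    """Compute MRE (mm) and SDR within `threshold_mm` for predicted landmarks.

    pred_px, gt_px : (N, L, 2) arrays of pixel coordinates for N images
                     with L landmarks each.
    spacing_mm     : isotropic pixel spacing used to convert pixels to mm
                     (assumed value; cephalometric datasets specify their own).
    """
    # Radial (Euclidean) error per landmark, converted to millimetres.
    radial_err_mm = np.linalg.norm(pred_px - gt_px, axis=-1) * spacing_mm

    mre = radial_err_mm.mean()                      # mean radial error
    sdr = (radial_err_mm <= threshold_mm).mean()    # fraction of hits
    return mre, sdr


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.uniform(0, 1935, size=(10, 19, 2))        # synthetic ground truth
    pred = gt + rng.normal(scale=15.0, size=gt.shape)  # synthetic predictions
    mre, sdr = evaluate_landmarks(pred, gt)
    print(f"MRE = {mre:.2f} mm, SDR@4mm = {sdr * 100:.2f}%")
```

In these terms, the paper's reported 79.27% within 4.0 mm corresponds to the `sdr` value and the 2.94 mm to `mre`.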

Source journal: Physics in Medicine and Biology (Medicine · Engineering, Biomedical)
CiteScore: 6.50
Self-citation rate: 14.30%
Annual publications: 409
Review time: 2 months
Journal description: The development and application of theoretical, computational and experimental physics to medicine, physiology and biology. Topics covered are: therapy physics (including ionizing and non-ionizing radiation); biomedical imaging (e.g. x-ray, magnetic resonance, ultrasound, optical and nuclear imaging); image-guided interventions; image reconstruction and analysis (including kinetic modelling); artificial intelligence in biomedical physics and analysis; nanoparticles in imaging and therapy; radiobiology; radiation protection and patient dose monitoring; radiation dosimetry.