Medical Image Analysis: Latest Articles

Biomechanical modeling combined with pressure-volume loop analysis to aid surgical planning in patients with complex congenital heart disease
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-12-15 · DOI: 10.1016/j.media.2024.103441
Maria Gusseva, Nikhil Thatte, Daniel A. Castellanos, Peter E. Hammer, Sunil J. Ghelani, Ryan Callahan, Tarique Hussain, Radomír Chabiniok
{"title":"Biomechanical modeling combined with pressure-volume loop analysis to aid surgical planning in patients with complex congenital heart disease","authors":"Maria Gusseva ,&nbsp;Nikhil Thatte ,&nbsp;Daniel A. Castellanos ,&nbsp;Peter E. Hammer ,&nbsp;Sunil J. Ghelani ,&nbsp;Ryan Callahan ,&nbsp;Tarique Hussain ,&nbsp;Radomír Chabiniok","doi":"10.1016/j.media.2024.103441","DOIUrl":"10.1016/j.media.2024.103441","url":null,"abstract":"<div><div>Patients with congenitally corrected transposition of the great arteries (ccTGA) can be treated with a double switch operation (DSO) to restore the normal anatomical connection of the left ventricle (LV) to the systemic circulation and the right ventricle (RV) to the pulmonary circulation. The subpulmonary LV progressively deconditions over time due to its connection to the low pressure pulmonary circulation and needs to be retrained using a surgical pulmonary artery band (PAB) for 6–12 months prior to the DSO. The subsequent clinical follow-up, consisting of invasive cardiac pressure and non-invasive imaging data, evaluates LV preparedness for the DSO. Evaluation using standard clinical techniques has led to unacceptable LV failure rates of ∼15 % after DSO. We propose a computational modeling framework to (1) reconstruct LV and RV pressure-volume (PV) loops from non-simultaneously acquired imaging and pressure data and gather model-derived mechanical indicators of ventricular function; and (2) perform <em>in silico</em> DSO to predict the functional response of the LV when connected to the high-pressure systemic circulation.</div><div>Clinical datasets of six patients with ccTGA after PAB, consisting of cardiac magnetic resonance imaging (MRI) and right and left heart catheterization, were used to build patient-specific models of LV and RV – <span><math><msubsup><mi>M</mi><mrow><mtext>baseline</mtext></mrow><mtext>LV</mtext></msubsup></math></span> and <span><math><msubsup><mi>M</mi><mrow><mtext>baseline</mtext></mrow><mtext>RV</mtext></msubsup></math></span>. For <em>in silico</em> DSO the models of <span><math><msubsup><mi>M</mi><mrow><mtext>baseline</mtext></mrow><mtext>LV</mtext></msubsup></math></span> and <span><math><msubsup><mi>M</mi><mrow><mtext>baseline</mtext></mrow><mtext>RV</mtext></msubsup></math></span> were used while imposing the afterload of systemic and pulmonary circulations, respectively. Model-derived contractility and Pressure-Volume Area (PVA) – i.e., the sum of stroke work and potential energy – were computed for both ventricles at baseline and after <em>in silico</em> DSO.</div><div><em>In silico</em> DSO suggests that three patients would require a substantial augmentation of LV contractility between 54 % and 80 % and an increase in PVA between 38 % and 79 % with respect to the baseline values to accommodate the increased afterload of the systemic circulation. On the contrary, the baseline functional state of the remaining three patients is predicted to be adequate to sustain cardiac output after the DSO.</div><div>This work demonstrates the vast variation of LV function among patients with ccTGA and emphasizes the importance of a biventricular approach to assess patients’ readiness for DSO. 
Model-derived predictions have the potential to provide additional insights into planning of complex surgical interventions.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103441"},"PeriodicalIF":10.7,"publicationDate":"2024-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142874009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
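As a companion to the PVA definition in the abstract above (PVA = stroke work + potential energy), here is a minimal Python sketch of how these quantities could be computed from a sampled PV loop. It is not the authors' biomechanical model: stroke work is taken as the enclosed loop area (shoelace formula), and potential energy uses the textbook triangular approximation, which assumes a linear ESPVR with volume-axis intercept `v0`.

```python
import numpy as np

def stroke_work(volume, pressure):
    """Area enclosed by one sampled cardiac cycle of the PV loop
    (shoelace formula). volume in mL, pressure in mmHg; result is in
    mmHg*mL (multiply by ~1.333e-4 to convert to joules)."""
    v, p = np.asarray(volume, float), np.asarray(pressure, float)
    return 0.5 * abs(np.sum(v * np.roll(p, -1) - p * np.roll(v, -1)))

def pressure_volume_area(volume, pressure, v0=0.0):
    """PVA = stroke work + potential energy.

    Potential energy is the classic triangular approximation between the
    end-systolic point and the assumed linear ESPVR intercept v0; the
    end-systolic sample is located as the point of maximal elastance
    P / (V - v0)."""
    v, p = np.asarray(volume, float), np.asarray(pressure, float)
    sw = stroke_work(v, p)
    i_es = np.argmax(p / np.maximum(v - v0, 1e-6))
    pe = 0.5 * p[i_es] * (v[i_es] - v0)
    return sw + pe
```

In Suga's energetics framework, PVA correlates with myocardial oxygen consumption, which is why it serves as an energetic measure of the ventricular workload imposed by the in silico DSO.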
Organ-level instance segmentation enables continuous time-space-spectrum analysis of pre-clinical abdominal photoacoustic tomography images
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-12-12 · DOI: 10.1016/j.media.2024.103402
Zhichao Liang, Shuangyang Zhang, Zongxin Mo, Xiaoming Zhang, Anqi Wei, Wufan Chen, Li Qi
{"title":"Organ-level instance segmentation enables continuous time-space-spectrum analysis of pre-clinical abdominal photoacoustic tomography images","authors":"Zhichao Liang,&nbsp;Shuangyang Zhang,&nbsp;Zongxin Mo,&nbsp;Xiaoming Zhang,&nbsp;Anqi Wei,&nbsp;Wufan Chen,&nbsp;Li Qi","doi":"10.1016/j.media.2024.103402","DOIUrl":"10.1016/j.media.2024.103402","url":null,"abstract":"<div><div>Photoacoustic tomography (PAT), as a novel biomedical imaging technique, is able to capture temporal, spatial and spectral tomographic information from organisms. Organ-level multi-parametric analysis of continuous PAT images are of interest since it enables the quantification of organ specific morphological and functional parameters in small animals. Accurate organ delineation is imperative for organ-level image analysis, yet the low contrast and blurred organ boundaries in PAT images pose challenge for their precise segmentation. Fortunately, shared structural information among continuous images in the time-space-spectrum domain may be used to enhance segmentation. In this paper, we introduce a structure fusion enhanced graph convolutional network (SFE-GCN), which aims at automatically segmenting major organs including the body, liver, kidneys, spleen, vessel and spine of abdominal PAT image of mice. SFE-GCN enhances the structural feature of organs by fusing information in continuous image sequence captured at time, space and spectrum domains. As validated on large-scale datasets across different imaging scenarios, our method not only preserves fine structural details but also ensures anatomically aligned organ contours. Most importantly, this study explores the application of SFE-GCN in multi-dimensional organ image analysis, including organ-based dynamic morphological analysis, organ-wise light fluence correction and segmentation-enhanced spectral un-mixing. Code will be released at <span><span>https://github.com/lzc-smu/SFEGCN.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103402"},"PeriodicalIF":10.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142846732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
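The paper's SFE-GCN builds on graph convolutions over structures shared across time, space and spectrum. As an orientation aid only, below is a minimal single graph-convolution layer in the style of Kipf & Welling; how SFE-GCN actually constructs its graph and fuses the three domains is specific to the paper, so the node/edge interpretation in the comments (adjacent frames or wavelengths as connected nodes) is an illustrative assumption.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer: X' = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features, e.g. per-frame organ descriptors;
        # adj: (N, N) adjacency, e.g. 1 for temporally/spectrally adjacent frames.
        a = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        d = a.sum(dim=1).pow(-0.5)
        a_hat = d.unsqueeze(1) * a * d.unsqueeze(0)          # symmetric normalization
        return torch.relu(self.lin(a_hat @ x))
```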
Personalized predictions of Glioblastoma infiltration: Mathematical models, Physics-Informed Neural Networks and multimodal scans
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-12-12 · DOI: 10.1016/j.media.2024.103423
Ray Zirui Zhang, Ivan Ezhov, Michal Balcerak, Andy Zhu, Benedikt Wiestler, Bjoern Menze, John S. Lowengrub
{"title":"Personalized predictions of Glioblastoma infiltration: Mathematical models, Physics-Informed Neural Networks and multimodal scans","authors":"Ray Zirui Zhang ,&nbsp;Ivan Ezhov ,&nbsp;Michal Balcerak ,&nbsp;Andy Zhu ,&nbsp;Benedikt Wiestler ,&nbsp;Bjoern Menze ,&nbsp;John S. Lowengrub","doi":"10.1016/j.media.2024.103423","DOIUrl":"10.1016/j.media.2024.103423","url":null,"abstract":"<div><div>Predicting the infiltration of Glioblastoma (GBM) from medical MRI scans is crucial for understanding tumor growth dynamics and designing personalized radiotherapy treatment plans. Mathematical models of GBM growth can complement the data in the prediction of spatial distributions of tumor cells. However, this requires estimating patient-specific parameters of the model from clinical data, which is a challenging inverse problem due to limited temporal data and the limited time between imaging and diagnosis. This work proposes a method that uses Physics-Informed Neural Networks (PINNs) to estimate patient-specific parameters of a reaction–diffusion partial differential equation (PDE) model of GBM growth from a single 3D structural MRI snapshot. PINNs embed both the data and the PDE into a loss function, thus integrating theory and data. Key innovations include the identification and estimation of characteristic non-dimensional parameters, a pre-training step that utilizes the non-dimensional parameters and a fine-tuning step to determine the patient specific parameters. Additionally, the diffuse-domain method is employed to handle the complex brain geometry within the PINN framework. The method is validated on both synthetic and patient datasets, showing promise for personalized GBM treatment through parametric inference within clinically relevant timeframes.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103423"},"PeriodicalIF":10.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142864837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
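To make the PINN idea concrete, here is a minimal 1D sketch (the paper works in 3D with a diffuse-domain formulation): a network u(x, t) is fit to imaging data while a Fisher-KPP reaction-diffusion residual, u_t − D·u_xx − ρ·u(1 − u), is penalized at collocation points, with D and ρ learned in log-space. Everything here, from the layer sizes to this specific PDE form, is a generic assumption rather than the authors' exact setup.

```python
import torch
import torch.nn as nn

# u(x, t): normalized tumor cell density predicted from space-time coordinates.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1), nn.Sigmoid())
log_D = nn.Parameter(torch.zeros(1))    # diffusivity, learned in log-space
log_rho = nn.Parameter(torch.zeros(1))  # proliferation rate, log-space

def pde_residual(x, t):
    """Fisher-KPP residual u_t - D*u_xx - rho*u*(1-u) at collocation points."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    g = lambda y, v: torch.autograd.grad(y, v, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = g(u, t), g(u, x)
    u_xx = g(u_x, x)
    return u_t - log_D.exp() * u_xx - log_rho.exp() * u * (1.0 - u)

def pinn_loss(x_d, t_d, u_d, x_c, t_c):
    """Data misfit on the (single) imaging snapshot + PDE residual penalty."""
    data = ((net(torch.cat([x_d, t_d], dim=1)) - u_d) ** 2).mean()
    physics = (pde_residual(x_c, t_c) ** 2).mean()
    return data + physics

# optimizer = torch.optim.Adam(list(net.parameters()) + [log_D, log_rho], lr=1e-3)
```

Training on a single snapshot is what makes the inverse problem hard: the PDE residual is the only term that constrains the temporal dynamics.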
Contrastive machine learning reveals species-shared and -specific brain functional architecture
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-12-12 · DOI: 10.1016/j.media.2024.103431
Li Yang, Guannan Cao, Songyao Zhang, Weihan Zhang, Yusong Sun, Jingchao Zhou, Tianyang Zhong, Yixuan Yuan, Tao Liu, Tianming Liu, Lei Guo, Yongchun Yu, Xi Jiang, Gang Li, Junwei Han, Tuo Zhang
{"title":"Contrastive machine learning reveals species -shared and -specific brain functional architecture","authors":"Li Yang ,&nbsp;Guannan Cao ,&nbsp;Songyao Zhang ,&nbsp;Weihan Zhang ,&nbsp;Yusong Sun ,&nbsp;Jingchao Zhou ,&nbsp;Tianyang Zhong ,&nbsp;Yixuan Yuan ,&nbsp;Tao Liu ,&nbsp;Tianming Liu ,&nbsp;Lei Guo ,&nbsp;Yongchun Yu ,&nbsp;Xi Jiang ,&nbsp;Gang Li ,&nbsp;Junwei Han ,&nbsp;Tuo Zhang","doi":"10.1016/j.media.2024.103431","DOIUrl":"10.1016/j.media.2024.103431","url":null,"abstract":"<div><div>A deep comparative analysis of brain functional connectome across species in primates has the potential to yield valuable insights for both scientific and clinical applications. However, the interspecies commonality and differences are inherently entangled with each other and with other irrelevant factors. Here we develop a novel contrastive machine learning method, called shared-unique variation autoencoder (SU-VAE), to allow disentanglement of the species-shared and species-specific functional connectome variation between macaque and human brains on large-scale resting-state fMRI datasets. The method was validated by confirming that human-specific features are differentially related to cognitive scores, while features shared with macaque better capture sensorimotor ones. The projection of disentangled connectomes to the cortex revealed a gradient that reflected species divergence. In contrast to macaque, the introduction of human-specific connectomes to the shared ones enhanced network efficiency. We identified genes enriched on ‘axon guidance’ that could be related to the human-specific connectomes. The code contains the model and analysis can be found in <span><span>https://github.com/BBBBrain/SU-VAE</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103431"},"PeriodicalIF":10.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142846527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
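The core trick, separating shared from species-specific variation, can be sketched as a VAE with one encoder shared by both species plus a per-species "unique" encoder, reconstructing from the concatenated latents. This is only a toy reading of the idea; the actual SU-VAE architecture and its contrastive objectives are in the repository linked above, and every layer choice below is an assumption.

```python
import torch
import torch.nn as nn

class SUVAESketch(nn.Module):
    """Toy shared-unique factorization: one encoder shared by both species,
    one 'specific' encoder per species; the decoder reconstructs connectome
    features from the concatenated [shared, specific] latents."""

    def __init__(self, d_in, d_shared, d_specific):
        super().__init__()
        self.enc_shared = nn.Linear(d_in, 2 * d_shared)   # outputs (mu, logvar)
        self.enc_specific = nn.ModuleDict({
            s: nn.Linear(d_in, 2 * d_specific) for s in ("human", "macaque")})
        self.dec = nn.Linear(d_shared + d_specific, d_in)

    @staticmethod
    def _reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return z, kl

    def forward(self, x, species):
        z_sh, kl_sh = self._reparam(self.enc_shared(x))
        z_sp, kl_sp = self._reparam(self.enc_specific[species](x))
        recon = self.dec(torch.cat([z_sh, z_sp], dim=-1))
        return recon, kl_sh + kl_sp  # train with MSE(recon, x) + beta * KL
```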
Improving cross-domain generalizability of medical image segmentation using uncertainty and shape-aware continual test-time domain adaptation
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-12-10 · DOI: 10.1016/j.media.2024.103422
Jiayi Zhu, Bart Bolsterlee, Yang Song, Erik Meijering
{"title":"Improving cross-domain generalizability of medical image segmentation using uncertainty and shape-aware continual test-time domain adaptation","authors":"Jiayi Zhu ,&nbsp;Bart Bolsterlee ,&nbsp;Yang Song ,&nbsp;Erik Meijering","doi":"10.1016/j.media.2024.103422","DOIUrl":"10.1016/j.media.2024.103422","url":null,"abstract":"<div><div>Continual test-time adaptation (CTTA) aims to continuously adapt a source-trained model to a target domain with minimal performance loss while assuming no access to the source data. Typically, source models are trained with empirical risk minimization (ERM) and assumed to perform reasonably on the target domain to allow for further adaptation. However, ERM-trained models often fail to perform adequately on a severely drifted target domain, resulting in unsatisfactory adaptation results. To tackle this issue, we propose a generalizable CTTA framework. First, we incorporate domain-invariant shape modeling into the model and train it using domain-generalization (DG) techniques, promoting target-domain adaptability regardless of the severity of the domain shift. Then, an uncertainty and shape-aware mean teacher network performs adaptation with uncertainty-weighted pseudo-labels and shape information. As part of this process, a novel uncertainty-ranked cross-task regularization scheme is proposed to impose consistency between segmentation maps and their corresponding shape representations, both produced by the student model, at the patch and global levels to enhance performance further. Lastly, small portions of the model’s weights are stochastically reset to the initial domain-generalized state at each adaptation step, preventing the model from ‘diving too deep’ into any specific test samples. The proposed method demonstrates strong continual adaptability and outperforms its peers on five cross-domain segmentation tasks, showcasing its effectiveness and generalizability.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103422"},"PeriodicalIF":10.7,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142864795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
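The stochastic weight reset described in the last step of the abstract is simple to implement. The sketch below mirrors the stochastic restoration popularized by CoTTA-style methods: after each adaptation step, a small random fraction of every weight tensor is snapped back to its source (domain-generalized) value. The restore probability p is an assumed hyperparameter, not the paper's value.

```python
import torch

@torch.no_grad()
def stochastic_restore(model, source_state, p=0.01):
    """After each adaptation step, snap a random fraction p of every weight
    tensor back to its source-trained (domain-generalized) value."""
    for name, param in model.named_parameters():
        mask = (torch.rand_like(param) < p).to(param.dtype)
        param.mul_(1.0 - mask).add_(source_state[name].to(param.device) * mask)

# Before adaptation starts, snapshot the source weights once:
#   source_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
```

This anchors the model to its domain-generalized initialization so that no single test sample can drag it irrecoverably far.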
MoMA: Momentum contrastive learning with multi-head attention-based knowledge distillation for histopathology image analysis
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-12-09 · DOI: 10.1016/j.media.2024.103421
Trinh Thi Le Vuong, Jin Tae Kwak
{"title":"MoMA: Momentum contrastive learning with multi-head attention-based knowledge distillation for histopathology image analysis","authors":"Trinh Thi Le Vuong,&nbsp;Jin Tae Kwak","doi":"10.1016/j.media.2024.103421","DOIUrl":"10.1016/j.media.2024.103421","url":null,"abstract":"<div><div>There is no doubt that advanced artificial intelligence models and high quality data are the keys to success in developing computational pathology tools. Although the overall volume of pathology data keeps increasing, a lack of quality data is a common issue when it comes to a specific task due to several reasons including privacy and ethical issues with patient data. In this work, we propose to exploit knowledge distillation, i.e., utilize the existing model to learn a new, target model, to overcome such issues in computational pathology. Specifically, we employ a student–teacher framework to learn a target model from a pre-trained, teacher model without direct access to source data and distill relevant knowledge via momentum contrastive learning with multi-head attention mechanism, which provides consistent and context-aware feature representations. This enables the target model to assimilate informative representations of the teacher model while seamlessly adapting to the unique nuances of the target data. The proposed method is rigorously evaluated across different scenarios where the teacher model was trained on the same, relevant, and irrelevant classification tasks with the target model. Experimental results demonstrate the accuracy and robustness of our approach in transferring knowledge to different domains and tasks, outperforming other related methods. Moreover, the results provide a guideline on the learning strategy for different types of tasks and scenarios in computational pathology.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103421"},"PeriodicalIF":10.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
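Two ingredients named in the title are easy to sketch generically: the momentum (EMA) teacher update and an InfoNCE-style contrastive loss between student queries and teacher keys. The multi-head-attention distillation that makes MoMA distinctive is not reproduced here; the momentum coefficient, temperature, and queue of negatives below are standard MoCo-style assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(student, teacher, m=0.999):
    """EMA teacher: weights trail the student, yielding stable targets."""
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(m).add_(ps.detach(), alpha=1.0 - m)

def info_nce(q, k, queue, tau=0.07):
    """q: student queries (B, D); k: teacher keys (B, D), the positives;
    queue: (K, D) teacher keys from past batches, the negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k.detach(), dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)    # (B, 1) similarity to positive
    l_neg = q @ F.normalize(queue, dim=1).t()   # (B, K) similarity to negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)      # the positive sits at index 0
```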
Dual-modality visual feature flow for medical report generation
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-12-01 · DOI: 10.1016/j.media.2024.103413
Quan Tang, Liming Xu, Yongheng Wang, Bochuan Zheng, Jiancheng Lv, Xianhua Zeng, Weisheng Li
{"title":"Dual-modality visual feature flow for medical report generation","authors":"Quan Tang ,&nbsp;Liming Xu ,&nbsp;Yongheng Wang ,&nbsp;Bochuan Zheng ,&nbsp;Jiancheng Lv ,&nbsp;Xianhua Zeng ,&nbsp;Weisheng Li","doi":"10.1016/j.media.2024.103413","DOIUrl":"10.1016/j.media.2024.103413","url":null,"abstract":"<div><div>Medical report generation, a cross-modal task of generating medical text information, aiming to provide professional descriptions of medical images in clinical language. Despite some methods have made progress, there are still some limitations, including insufficient focus on lesion areas, omission of internal edge features, and difficulty in aligning cross-modal data. To address these issues, we propose Dual-Modality Visual Feature Flow (DMVF) for medical report generation. Firstly, we introduce region-level features based on grid-level features to enhance the method's ability to identify lesions and key areas. Then, we enhance two types of feature flows based on their attributes to prevent the loss of key information, respectively. Finally, we align visual mappings from different visual feature with report textual embeddings through a feature fusion module to perform cross-modal learning. Extensive experiments conducted on four benchmark datasets demonstrate that our approach outperforms the state-of-the-art methods in both natural language generation and clinical efficacy metrics.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103413"},"PeriodicalIF":10.7,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142853978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
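As a rough illustration of fusing grid- and region-level visual features and aligning the result with report embeddings, here is a generic gated-fusion sketch. The paper's actual feature-flow enhancement and fusion module will differ, so every layer and loss choice below is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDualFusion(nn.Module):
    """Project grid- and region-level visual features into the text embedding
    space and mix them with a learned per-dimension gate."""

    def __init__(self, d_grid, d_region, d_text):
        super().__init__()
        self.proj_g = nn.Linear(d_grid, d_text)
        self.proj_r = nn.Linear(d_region, d_text)
        self.gate = nn.Sequential(nn.Linear(2 * d_text, d_text), nn.Sigmoid())

    def forward(self, f_grid, f_region):
        g, r = self.proj_g(f_grid), self.proj_r(f_region)
        a = self.gate(torch.cat([g, r], dim=-1))  # per-dimension mixing weight
        return a * g + (1.0 - a) * r              # fused visual embedding

def alignment_loss(visual, text):
    """Cosine loss pulling fused visual embeddings toward paired report embeddings."""
    v, t = F.normalize(visual, dim=-1), F.normalize(text, dim=-1)
    return (1.0 - (v * t).sum(-1)).mean()
```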
Toward automated detection of microbleeds with anatomical scale localization using deep learning
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-11-30 · DOI: 10.1016/j.media.2024.103415
Jun-Ho Kim, Young Noh, Haejoon Lee, Seul Lee, Woo-Ram Kim, Koung Mi Kang, Eung Yeop Kim, Mohammed A. Al-masni, Dong-Hyun Kim
{"title":"Toward automated detection of microbleeds with anatomical scale localization using deep learning","authors":"Jun-Ho Kim ,&nbsp;Young Noh ,&nbsp;Haejoon Lee ,&nbsp;Seul Lee ,&nbsp;Woo-Ram Kim ,&nbsp;Koung Mi Kang ,&nbsp;Eung Yeop Kim ,&nbsp;Mohammed A. Al-masni ,&nbsp;Dong-Hyun Kim","doi":"10.1016/j.media.2024.103415","DOIUrl":"10.1016/j.media.2024.103415","url":null,"abstract":"<div><div>Cerebral Microbleeds (CMBs) are chronic deposits of small blood products in the brain tissues, which have explicit relation to various cerebrovascular diseases depending on their anatomical location, including cognitive decline, intracerebral hemorrhage, and cerebral infarction. However, manual detection of CMBs is a time consuming and error-prone process because of their sparse and tiny structural properties. The detection of CMBs is commonly affected by the presence of many CMB mimics that cause a high false-positive rate (FPR), such as calcifications and pial vessels. This paper proposes a novel 3D deep learning framework that not only detects CMBs but also identifies their anatomical location in the brain (i.e., lobar, deep, and infratentorial regions). For the CMBs detection task, we propose a single end-to-end model by leveraging the 3D U-Net as a backbone with Region Proposal Network (RPN). To significantly reduce the false positives within the same single model, we develop a new scheme, containing Feature Fusion Module (FFM) that detects small candidates utilizing contextual information and Hard Sample Prototype Learning (HSPL) that mines CMB mimics and generates additional loss term called concentration loss using Convolutional Prototype Learning (CPL). For the anatomical localization task, we exploit the 3D U-Net segmentation network to segment anatomical structures of the brain. This task not only identifies to which region the CMBs belong but also eliminates some false positives from the detection task by leveraging anatomical information. We utilize Susceptibility-Weighted Imaging (SWI) and phase images as 3D input to efficiently capture 3D information. The results show that the proposed RPN that utilizes the FFM and HSPL outperforms the baseline RPN and achieves a sensitivity of 94.66 % vs. 93.33 % and an average number of false positives per subject (FP<sub>avg</sub>) of 0.86 vs. 14.73. Furthermore, the anatomical localization task enhances the detection performance by reducing the FP<sub>avg</sub> to 0.56 while maintaining the sensitivity of 94.66 %.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103415"},"PeriodicalIF":10.7,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142789978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
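The concentration loss builds on Convolutional Prototype Learning: candidate embeddings are pulled toward a learnable prototype of their class, tightening the CMB cluster away from the mimics mined by HSPL. The sketch below shows only that distance-to-prototype term; the paper's exact formulation and its integration with the RPN are not reproduced, and the class layout is an assumption.

```python
import torch
import torch.nn as nn

class ConcentrationLoss(nn.Module):
    """Distance-to-prototype term: each candidate embedding is pulled toward
    a learnable prototype of its class (e.g., class 0 = CMB, 1 = mimic)."""

    def __init__(self, n_classes, d):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_classes, d))

    def forward(self, feats, labels):
        # feats: (B, d) candidate embeddings; labels: (B,) integer class ids
        return ((feats - self.prototypes[labels]) ** 2).sum(dim=1).mean()
```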
Comparative benchmarking of failure detection methods in medical image segmentation: Unveiling the role of confidence aggregation
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-11-30 · DOI: 10.1016/j.media.2024.103392
Maximilian Zenk, David Zimmerer, Fabian Isensee, Jeremias Traub, Tobias Norajitra, Paul F. Jäger, Klaus Maier-Hein
{"title":"Comparative benchmarking of failure detection methods in medical image segmentation: Unveiling the role of confidence aggregation","authors":"Maximilian Zenk ,&nbsp;David Zimmerer ,&nbsp;Fabian Isensee ,&nbsp;Jeremias Traub ,&nbsp;Tobias Norajitra ,&nbsp;Paul F. Jäger ,&nbsp;Klaus Maier-Hein","doi":"10.1016/j.media.2024.103392","DOIUrl":"10.1016/j.media.2024.103392","url":null,"abstract":"<div><div>Semantic segmentation is an essential component of medical image analysis research, with recent deep learning algorithms offering out-of-the-box applicability across diverse datasets. Despite these advancements, segmentation failures remain a significant concern for real-world clinical applications, necessitating reliable detection mechanisms. This paper introduces a comprehensive benchmarking framework aimed at evaluating failure detection methodologies within medical image segmentation. Through our analysis, we identify the strengths and limitations of current failure detection metrics, advocating for the risk-coverage analysis as a holistic evaluation approach. Utilizing a collective dataset comprising five public 3D medical image collections, we assess the efficacy of various failure detection strategies under realistic test-time distribution shifts. Our findings highlight the importance of pixel confidence aggregation and we observe superior performance of the pairwise Dice score (Roy et al., 2019) between ensemble predictions, positioning it as a simple and robust baseline for failure detection in medical image segmentation. To promote ongoing research, we make the benchmarking framework available to the community.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103392"},"PeriodicalIF":10.7,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142807657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
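The winning baseline is easy to state in code: for each case, compute the mean pairwise Dice agreement between the segmentation masks produced by the ensemble members and use it as a confidence score, so that low agreement flags a likely failure. A minimal version, assuming binary masks of equal shape:

```python
import itertools
import numpy as np

def pairwise_dice_confidence(masks, eps=1e-8):
    """Mean pairwise Dice agreement between ensemble predictions of one case.

    masks: sequence of same-shape binary arrays, one per ensemble member.
    Low agreement flags a likely segmentation failure."""
    scores = []
    for a, b in itertools.combinations(masks, 2):
        inter = np.logical_and(a, b).sum()
        scores.append(2.0 * inter / (a.sum() + b.sum() + eps))
    return float(np.mean(scores))
```

Cases can then be ranked by this score for the risk-coverage analysis the paper advocates: coverage is traded off against segmentation risk by deferring the lowest-confidence cases.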
Outlier detection in cardiac diffusion tensor imaging: Shot rejection or robust fitting?
IF 10.7 · CAS Tier 1 · Medicine
Medical Image Analysis · Pub Date: 2024-11-30 · DOI: 10.1016/j.media.2024.103386
Sam Coveney, Maryam Afzali, Lars Mueller, Irvin Teh, Arka Das, Erica Dall’Armellina, Filip Szczepankiewicz, Derek K. Jones, Jurgen E. Schneider
{"title":"Outlier detection in cardiac diffusion tensor imaging: Shot rejection or robust fitting?","authors":"Sam Coveney ,&nbsp;Maryam Afzali ,&nbsp;Lars Mueller ,&nbsp;Irvin Teh ,&nbsp;Arka Das ,&nbsp;Erica Dall’Armellina ,&nbsp;Filip Szczepankiewicz ,&nbsp;Derek K. Jones ,&nbsp;Jurgen E. Schneider","doi":"10.1016/j.media.2024.103386","DOIUrl":"10.1016/j.media.2024.103386","url":null,"abstract":"<div><div>Cardiac diffusion tensor imaging (cDTI) is highly prone to image corruption, yet robust-fitting methods are rarely used. Single voxel outlier detection (SVOD) can overlook corruptions that are visually obvious, perhaps causing reluctance to replace whole-image shot-rejection (SR) despite its own deficiencies. SVOD’s deficiencies may be relatively unimportant: corrupted signals that are not statistical outliers may not be detrimental. Multiple voxel outlier detection (MVOD), using a local myocardial neighbourhood, may overcome the shared deficiencies of SR and SVOD for cDTI while keeping the benefits of both. Here, robust fitting methods using M-estimators are derived for both non-linear least squares and weighted least squares fitting, and outlier detection is applied using (i) SVOD; and (ii) SVOD and MVOD. These methods, along with non-robust fitting with/without SR, are applied to cDTI datasets from healthy volunteers and hypertrophic cardiomyopathy patients. Robust fitting methods produce larger group differences with more statistical significance for MD, FA, and E2A, versus non-robust methods, with MVOD giving the largest group differences for MD and FA. Visual analysis demonstrates the superiority of robust-fitting methods over SR, especially when it is difficult to partition the images into good and bad sets. Synthetic experiments confirm that MVOD gives lower root-mean-square-error than SVOD.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"Article 103386"},"PeriodicalIF":10.7,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142818560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
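Robust fitting with M-estimators is typically run as iteratively reweighted least squares (IRLS). Below is a generic linear-model sketch with Huber weights and a MAD-based scale estimate; the paper derives estimators for non-linear and weighted least squares tensor fitting, so treat this as the underlying principle rather than their implementation.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber M-estimator weights: 1 for small residuals, c/|r| beyond c."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def irls_fit(A, y, iters=10):
    """Iteratively reweighted least squares for the linear model y ~ A @ x.

    For diffusion imaging, A would hold the b-matrix design rows and y the
    log-signals of one voxel; corrupted measurements end up down-weighted
    instead of a whole image being rejected (shot rejection)."""
    w = np.ones_like(y, dtype=float)
    for _ in range(iters):
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD
        w = huber_weights(r / scale)
    return x, w  # final weights expose which measurements look like outliers
```

The per-measurement weights make voxelwise outlier handling (SVOD) natural; pooling residual evidence over a local myocardial neighbourhood is the MVOD extension the paper studies.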