Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention — Latest Publications
{"title":"FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images.","authors":"Yiqing Shen, Jingxing Li, Xinyuan Shao, Blanca Inigo Romillo, Ankush Jindal, David Dreizin, Mathias Unberath","doi":"10.1007/978-3-031-72390-2_51","DOIUrl":"10.1007/978-3-031-72390-2_51","url":null,"abstract":"<p><p>Segment anything models (SAMs) are gaining attention for their zero-shot generalization capability in segmenting objects of unseen classes and in unseen domains when properly prompted. Interactivity is a key strength of SAMs, allowing users to iteratively provide prompts that specify objects of interest to refine outputs. However, to realize the interactive use of SAMs for 3D medical imaging tasks, rapid inference times are necessary. High memory requirements and long processing delays remain constraints that hinder the adoption of SAMs for this purpose. Specifically, while 2D SAMs applied to 3D volumes contend with repetitive computation to process all slices independently, 3D SAMs suffer from an exponential increase in model parameters and FLOPS. To address these challenges, we present FastSAM3D which accelerates SAM inference to 8 milliseconds per 128 × 128 × 128 3D volumetric image on an NVIDIA A100 GPU. This speedup is accomplished through 1) a novel layer-wise progressive distillation scheme that enables knowledge transfer from a complex 12-layer ViT-B to a lightweight 6-layer ViT-Tiny variant encoder without training from scratch; and 2) a novel 3D sparse flash attention to replace vanilla attention operators, substantially reducing memory needs and improving parallelization. Experiments on three diverse datasets reveal that FastSAM3D achieves a remarkable speedup of 527.38× compared to 2D SAMs and 8.75× compared to 3D SAMs on the same volumes without significant performance decline. Thus, FastSAM3D opens the door for low-cost truly interactive SAM-based 3D medical imaging segmentation with commonly used GPU hardware. Code is available at https://github.com/arcadelab/FastSAM3D.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15012 ","pages":"542-552"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12377522/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144984624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Whole Slide Image Classification with Discriminative and Contrastive Learning.","authors":"Peixian Liang, Hao Zheng, Hongming Li, Yuxin Gong, Spyridon Bakas, Yong Fan","doi":"10.1007/978-3-031-72083-3_10","DOIUrl":"10.1007/978-3-031-72083-3_10","url":null,"abstract":"<p><p>Whole slide image (WSI) classification plays a crucial role in digital pathology data analysis. However, the immense size of WSIs and the absence of fine-grained sub-region labels pose significant challenges for accurate WSI classification. Typical classification-driven deep learning methods often struggle to generate informative image representations, which can compromise the robustness of WSI classification. In this study, we address this challenge by incorporating both discriminative and contrastive learning techniques for WSI classification. Different from the existing contrastive learning methods for WSI classification that primarily rely on pseudo labels assigned to patches based on the WSI-level labels, our approach takes a different route to directly focus on constructing positive and negative samples at the WSI-level. Specifically, we select a subset of representative image patches to represent WSIs and create positive and negative samples at the WSI-level, facilitating effective learning of informative image features. Experimental results on two datasets and ablation studies have demonstrated that our method significantly improved the WSI classification performance compared to state-of-the-art deep learning methods and enabled learning of informative features that promoted robustness of the WSI classification.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15004 ","pages":"102-112"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11877581/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143568358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection.","authors":"Alex Ling Yu Hung, Haoxin Zheng, Kai Zhao, Kaifeng Pang, Demetri Terzopoulos, Kyunghyun Sung","doi":"10.1007/978-3-031-72111-3_11","DOIUrl":"10.1007/978-3-031-72111-3_11","url":null,"abstract":"<p><p>Current deep learning-based models typically analyze medical images in either 2D or 3D albeit disregarding volumetric information or suffering sub-optimal performance due to the anisotropic resolution of MR data. Furthermore, providing an accurate uncertainty estimation is beneficial to clinicians, as it indicates how confident a model is about its prediction. We propose a novel 2.5D cross-slice attention model that utilizes both global and local information, along with an evidential critical loss, to perform evidential deep learning for the detection in MR images of prostate cancer, one of the most common cancers and a leading cause of cancer-related death in men. We perform extensive experiments with our model on two different datasets and achieve state-of-the-art performance in prostate cancer detection along with improved epistemic uncertainty estimation. The implementation of the model is available at https://github.com/aL3x-O-o-Hung/GLCSA_ECLoss.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15008 ","pages":"113-123"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11646698/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142831545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conditional Diffusion Model with Spatial Attention and Latent Embedding for Medical Image Segmentation.","authors":"Behzad Hejrati, Soumyanil Banerjee, Carri Glide-Hurst, Ming Dong","doi":"10.1007/978-3-031-72114-4_20","DOIUrl":"10.1007/978-3-031-72114-4_20","url":null,"abstract":"<p><p>Diffusion models have been used extensively for high quality image and video generation tasks. In this paper, we propose a novel conditional diffusion model with spatial attention and latent embedding (cDAL) for medical image segmentation. In cDAL, a convolutional neural network (CNN) based discriminator is used at every time-step of the diffusion process to distinguish between the generated labels and the real ones. A spatial attention map is computed based on the features learned by the discriminator to help cDAL generate more accurate segmentation of discriminative regions in an input image. Additionally, we incorporated a random latent embedding into each layer of our model to significantly reduce the number of training and sampling time-steps, thereby making it much faster than other diffusion models for image segmentation. We applied cDAL on 3 publicly available medical image segmentation datasets (MoNuSeg, Chest X-ray and Hippocampus) and observed significant qualitative and quantitative improvements with higher Dice scores and mIoU over the state-of-the-art algorithms. The source code is publicly available at https://github.com/Hejrati/cDAL/.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15009 ","pages":"202-212"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974562/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Weakly Supervised Cerebellar Cortical Surface Parcellation with Self-Visual Representation Learning.","authors":"Zhengwang Wu, Jiale Cheng, Fenqiang Zhao, Ya Wang, Yue Sun, Dajiang Zhu, Tianming Liu, Valerie Jewells, Weili Lin, Li Wang, Gang Li","doi":"10.1007/978-3-031-43993-3_42","DOIUrl":"https://doi.org/10.1007/978-3-031-43993-3_42","url":null,"abstract":"<p><p>The cerebellum (i.e., little brain) plays an important role in motion and balances control abilities, despite its much smaller size and deeper sulci compared to the cerebrum. Previous cerebellum studies mainly relied on and focused on conventional volumetric analysis, which ignores the extremely deep and highly convoluted nature of the cerebellar cortex. To better reveal localized functional and structural changes, we propose cortical surface-based analysis of the cerebellar cortex. Specifically, we first reconstruct the cerebellar cortical surfaces to represent and characterize the highly folded cerebellar cortex in a geometrically accurate and topologically correct manner. Then, we propose a novel method to automatically parcellate the cerebellar cortical surface into anatomically meaningful regions by a weakly supervised graph convolutional neural network. Instead of relying on registration or requiring mapping the cerebellar surface to a sphere, which are either inaccurate or have large geometric distortions due to the deep cerebellar sulci, our learning-based model directly deals with the original cerebellar cortical surface by decomposing this challenging task into two steps. First, we learn the effective representation of the cerebellar cortical surface patches with a contrastive self-learning framework. Then, we map the learned representations to parcellation labels. We have validated our method using data from the Baby Connectome Project and the experimental results demonstrate its superior effectiveness and accuracy, compared to existing methods.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14227 ","pages":"429-438"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12030008/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144036370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MRIS: A Multi-modal Retrieval Approach for Image Synthesis on Diverse Modalities.","authors":"Boqi Chen, Marc Niethammer","doi":"10.1007/978-3-031-43999-5_26","DOIUrl":"10.1007/978-3-031-43999-5_26","url":null,"abstract":"<p><p>Multiple imaging modalities are often used for disease diagnosis, prediction, or population-based analyses. However, not all modalities might be available due to cost, different study designs, or changes in imaging technology. If the differences between the types of imaging are small, data harmonization approaches can be used; for larger changes, direct image synthesis approaches have been explored. In this paper, we develop an approach based on multi-modal metric learning to synthesize images of diverse modalities. We use metric learning via multi-modal image retrieval, resulting in embeddings that can relate images of different modalities. Given a large image database, the learned image embeddings allow us to use k-nearest neighbor (<i>k</i>-NN) regression for image synthesis. Our driving medical problem is knee osteoarthritis (KOA), but our developed method is general after proper image alignment. We test our approach by synthesizing cartilage thickness maps obtained from 3D magnetic resonance (MR) images using 2D radiographs. Our experiments show that the proposed method outperforms direct image synthesis and that the synthesized thickness maps retain information relevant to downstream tasks such as progression prediction and Kellgren-Lawrence grading (KLG). Our results suggest that retrieval approaches can be used to obtain high-quality and meaningful image synthesis results given large image databases.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14229 ","pages":"271-281"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378323/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142157088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Does Pruning Impact Long-Tailed Multi-label Medical Image Classifiers?","authors":"Gregory Holste, Ziyu Jiang, Ajay Jaiswal, Maria Hanna, Shlomo Minkowitz, Alan C Legasto, Joanna G Escalon, Sharon Steinberger, Mark Bittman, Thomas C Shen, Ying Ding, Ronald M Summers, George Shih, Yifan Peng, Zhangyang Wang","doi":"10.1007/978-3-031-43904-9_64","DOIUrl":"10.1007/978-3-031-43904-9_64","url":null,"abstract":"<p><p>Pruning has emerged as a powerful technique for compressing deep neural networks, reducing memory usage and inference time without significantly affecting overall performance. However, the nuanced ways in which pruning impacts model behavior are not well understood, particularly for <i>long-tailed</i>, <i>multi-label</i> datasets commonly found in clinical settings. This knowledge gap could have dangerous implications when deploying a pruned model for diagnosis, where unexpected model behavior could impact patient well-being. To fill this gap, we perform the first analysis of pruning's effect on neural networks trained to diagnose thorax diseases from chest X-rays (CXRs). On two large CXR datasets, we examine which diseases are most affected by pruning and characterize class \"forgettability\" based on disease frequency and co-occurrence behavior. Further, we identify individual CXRs where uncompressed and heavily pruned models disagree, known as pruning-identified exemplars (PIEs), and conduct a human reader study to evaluate their unifying qualities. We find that radiologists perceive PIEs as having more label noise, lower image quality, and higher diagnosis difficulty. This work represents a first step toward understanding the impact of pruning on model behavior in deep long-tailed, multi-label medical image classification. All code, model weights, and data access instructions can be found at https://github.com/VITA-Group/PruneCXR.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14224 ","pages":"663-673"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10568970/pdf/nihms-1936096.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41224575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape-Aware 3D Small Vessel Segmentation with Local Contrast Guided Attention.","authors":"Zhiwei Deng, Songnan Xu, Jianwei Zhang, Jiong Zhang, Danny J Wang, Lirong Yan, Yonggang Shi","doi":"10.1007/978-3-031-43901-8_34","DOIUrl":"10.1007/978-3-031-43901-8_34","url":null,"abstract":"<p><p>The automated segmentation and analysis of small vessels from <i>in vivo</i> imaging data is an important task for many clinical applications. While current filtering and learning methods have achieved good performance on the segmentation of large vessels, they are sub-optimal for small vessel detection due to their apparent geometric irregularity and weak contrast given the relatively limited resolution of existing imaging techniques. In addition, for supervised learning approaches, the acquisition of accurate pixel-wise annotations in these small vascular regions heavily relies on skilled experts. In this work, we propose a novel self-supervised network to tackle these challenges and improve the detection of small vessels from 3D imaging data. First, our network maximizes a novel shape-aware flux-based measure to enhance the estimation of small vasculature with non-circular and irregular appearances. Then, we develop novel local contrast guided attention(LCA) and enhancement(LCE) modules to boost the vesselness responses of vascular regions of low contrast. In our experiments, we compare with four filtering-based methods and a state-of-the-art self-supervised deep learning method in multiple 3D datasets to demonstrate that our method achieves significant improvement in all datasets. Further analysis and ablation studies have also been performed to assess the contributions of various modules to the improved performance in 3D small vessel segmentation. Our code is available at https://github.com/dengchihwei/LCNetVesselSeg.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14223 ","pages":"354-363"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948105/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140159871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Democratizing Pathological Image Segmentation with Lay Annotators via Molecular-empowered Learning.","authors":"Ruining Deng, Yanwei Li, Peize Li, Jiacheng Wang, Lucas W Remedios, Saydolimkhon Agzamkhodjaev, Zuhayr Asad, Quan Liu, Can Cui, Yaohong Wang, Yihan Wang, Yucheng Tang, Haichun Yang, Yuankai Huo","doi":"10.1007/978-3-031-43987-2_48","DOIUrl":"10.1007/978-3-031-43987-2_48","url":null,"abstract":"<p><p>Multi-class cell segmentation in high-resolution Giga-pixel whole slide images (WSI) is critical for various clinical applications. Training such an AI model typically requires labor-intensive pixel-wise manual annotation from experienced domain experts (e.g., pathologists). Moreover, such annotation is error-prone when differentiating fine-grained cell types (e.g., podocyte and mesangial cells) via the naked human eye. In this study, we assess the feasibility of democratizing pathological AI deployment by only using lay annotators (annotators without medical domain knowledge). The contribution of this paper is threefold: (1) We proposed a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) The proposed method integrated Giga-pixel level molecular-morphology cross-modality registration, molecular-informed annotation, and molecular-oriented segmentation model, so as to achieve significantly superior performance via 3 lay annotators as compared with 2 experienced pathologists; (3) A deep corrective learning (learning with imperfect label) method is proposed to further improve the segmentation performance using partially annotated noisy data. From the experimental results, our learning method achieved F1 = 0.8496 using molecular-informed annotations from lay annotators, which is better than conventional morphology-based annotations (F1 = 0.7015) from experienced pathologists. Our method democratizes the development of a pathological segmentation deep model to the lay annotator level, which consequently scales up the learning process similar to a non-medical computer vision task. The official implementation and cell annotations are publicly available at https://github.com/hrlblab/MolecularEL.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14225 ","pages":"497-507"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10961594/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140290108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Unified Deep-Learning-Based Framework for Cochlear Implant Electrode Array Localization.","authors":"Yubo Fan, Jianing Wang, Yiyuan Zhao, Rui Li, Han Liu, Robert F Labadie, Jack H Noble, Benoit M Dawant","doi":"10.1007/978-3-031-43996-4_36","DOIUrl":"10.1007/978-3-031-43996-4_36","url":null,"abstract":"<p><p>Cochlear implants (CIs) are neuroprosthetics that can provide a sense of sound to people with severe-to-profound hearing loss. A CI contains an electrode array (EA) that is threaded into the cochlea during surgery. Recent studies have shown that hearing outcomes are correlated with EA placement. An image-guided cochlear implant programming technique is based on this correlation and utilizes the EA location with respect to the intracochlear anatomy to help audiologists adjust the CI settings to improve hearing. Automated methods to localize EA in postoperative CT images are of great interest for large-scale studies and for translation into the clinical workflow. In this work, we propose a unified deep-learning-based framework for automated EA localization. It consists of a multi-task network and a series of postprocessing algorithms to localize various types of EAs. The evaluation on a dataset with 27 cadaveric samples shows that its localization error is slightly smaller than the state-of-the-art method. Another evaluation on a large-scale clinical dataset containing 561 cases across two institutions demonstrates a significant improvement in robustness compared to the state-of-the-art method. This suggests that this technique could be integrated into the clinical workflow and provide audiologists with information that facilitates the programming of the implant leading to improved patient care.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14228 ","pages":"376-385"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10976972/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140338426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}