Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention - Latest Publications

MUTUAL: Towards Holistic Sensing and Inference in the Operating Room.
Julien Quarez, Yang Li, Hassna Irzan, Matthew Elliot, Oscar MacCormac, James Knigth, Martin Huber, Toktam Mahmoodi, Prokar Dasgupta, Sebastien Ourselin, Nicholas Raison, Jonathan Shapey, Alejandro Granados
{"title":"MUTUAL: Towards Holistic Sensing and Inference in the Operating Room.","authors":"Julien Quarez, Yang Li, Hassna Irzan, Matthew Elliot, Oscar MacCormac, James Knigth, Martin Huber, Toktam Mahmoodi, Prokar Dasgupta, Sebastien Ourselin, Nicholas Raison, Jonathan Shapey, Alejandro Granados","doi":"10.1007/978-3-031-77610-6_17","DOIUrl":"10.1007/978-3-031-77610-6_17","url":null,"abstract":"<p><p>Embodied AI (E-AI) in the form of intelligent surgical robotics and other agents is calling for data platforms to facilitate its development and deployment. In this work, we present a cross-platform multimodal data recording and streaming software, MUTUAL, successfully deployed on two clinical studies, along with its ROS 2 distributed adaptation, MUTUAL-ROS 2. We describe and compare the two implementations of MUTUAL through their recording performance under different settings. MUTUAL offers robust recording performance at target configurations for multiple modalities, including video, audio, and live expert commentary. While this recording performance is not matched by MUTUAL-ROS 2, we demonstrate its advantages related to real-time streaming capabilities for AI inference and more horizontal scalability, key aspects for E-AI systems in the operating room. Our findings demonstrate that the baseline MUTUAL is well-suited for data curation and offline analysis, whereas MUTUAL-ROS 2, should match the recording reliability of the baseline system under a fully distributed manner where modalities are handled independently by edge computing devices. These insights are critical for advancing the integration of E-AI in surgical practice, ensuring that data infrastructure can support both robust recording and real-time processing needs.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":" ","pages":"178-188"},"PeriodicalIF":0.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7617325/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143049392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
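For readers unfamiliar with the distributed pattern the abstract refers to, the following is a minimal sketch of a per-modality ROS 2 publisher node running on an edge device. The node and topic names are hypothetical and this is not the MUTUAL-ROS 2 code.

```python
# Minimal sketch (not the authors' code): one edge node publishing a single
# modality on its own ROS 2 topic, in the spirit of MUTUAL-ROS 2's design
# where each modality is handled independently. Names are made up.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class ModalityPublisher(Node):
    def __init__(self, modality: str = "commentary"):
        super().__init__(f"{modality}_publisher")
        self.pub = self.create_publisher(String, f"or/{modality}", 10)
        self.timer = self.create_timer(1.0, self.tick)  # publish once per second
        self.count = 0

    def tick(self):
        msg = String()
        msg.data = f"{self.get_name()} sample {self.count}"
        self.count += 1
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = ModalityPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```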
Zoom Pattern Signatures for Fetal Ultrasound Structures.
Mohammad Alsharid, Robail Yasrab, Lior Drukker, Aris T Papageorghiou, J Alison Noble
{"title":"Zoom Pattern Signatures for Fetal Ultrasound Structures.","authors":"Mohammad Alsharid, Robail Yasrab, Lior Drukker, Aris T Papageorghiou, J Alison Noble","doi":"10.1007/978-3-031-72083-3_73","DOIUrl":"10.1007/978-3-031-72083-3_73","url":null,"abstract":"<p><p>During a fetal ultrasound scan, a sonographer will zoom in and zoom out as they attempt to get clearer images of the anatomical structures of interest. This paper explores how to use this zoom information which is an under-utilised piece of information that is extractable from fetal ultrasound images. We explore associating zooming patterns to specific structures. The presence of such patterns would indicate that each individual anatomical structure has a unique signature associated with it, thereby allowing for classification of fetal ultrasound clips without directly reading the actual fetal ultrasound images in a convolutional neural network.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15004 ","pages":"786-795"},"PeriodicalIF":0.0,"publicationDate":"2024-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7616787/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
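To make the idea concrete, here is a small sketch of a classifier that sees only a 1-D sequence of zoom levels and never the image pixels. The architecture, sequence length, and number of structure classes are assumptions for illustration, not the paper's model.

```python
# Sketch (assumed toy model, not the paper's): predict which anatomical structure
# is being examined from the zoom-level time series alone, with no image input.
import torch
import torch.nn as nn


class ZoomSignatureClassifier(nn.Module):
    def __init__(self, num_structures: int = 4, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_structures)

    def forward(self, zoom_seq):            # zoom_seq: (batch, time, 1) zoom depths
        _, h = self.rnn(zoom_seq)           # h: (1, batch, hidden) last hidden state
        return self.head(h.squeeze(0))      # logits: (batch, num_structures)


# Toy usage: 8 clips, each with 50 time steps of (normalised) zoom depth.
model = ZoomSignatureClassifier()
logits = model(torch.rand(8, 50, 1))
print(logits.shape)  # torch.Size([8, 4])
```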
Estimation and Analysis of Slice Propagation Uncertainty in 3D Anatomy Segmentation.
Rachaell Nihalaani, Tushar Kataria, Jadie Adams, Shireen Y Elhabian
{"title":"Estimation and Analysis of Slice Propagation Uncertainty in 3D Anatomy Segmentation.","authors":"Rachaell Nihalaani, Tushar Kataria, Jadie Adams, Shireen Y Elhabian","doi":"10.1007/978-3-031-72117-5_26","DOIUrl":"10.1007/978-3-031-72117-5_26","url":null,"abstract":"<p><p>Supervised methods for 3D anatomy segmentation demonstrate superior performance but are often limited by the availability of annotated data. This limitation has led to a growing interest in self-supervised approaches in tandem with the abundance of available unannotated data. Slice propagation has emerged as a self-supervised approach that leverages slice registration as a self-supervised task to achieve full anatomy segmentation with minimal supervision. This approach significantly reduces the need for domain expertise, time, and the cost associated with building fully annotated datasets required for training segmentation networks. However, this shift toward reduced supervision via deterministic networks raises concerns about the trustworthiness and reliability of predictions, especially when compared with more accurate supervised approaches. To address this concern, we propose integrating calibrated uncertainty quantification (UQ) into slice propagation methods, which would provide insights into the model's predictive reliability and confidence levels. Incorporating uncertainty measures enhances user confidence in self-supervised approaches, thereby improving their practical applicability. We conducted experiments on three datasets for 3D abdominal segmentation using five UQ methods. The results illustrate that incorporating UQ improves not only model trustworthiness but also segmentation accuracy. Furthermore, our analysis reveals various failure modes of slice propagation methods that might not be immediately apparent to end-users. This study opens up new research avenues to improve the accuracy and trustworthiness of slice propagation methods.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15010 ","pages":"273-285"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11520486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142549934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
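As a pointer to what voxel-wise uncertainty can look like in practice, here is a minimal sketch that turns several stochastic forward passes (e.g., Monte Carlo dropout or an ensemble) into a mean prediction and a per-voxel predictive entropy map. This is a generic recipe, not necessarily one of the five UQ methods evaluated in the paper.

```python
# Sketch (generic recipe, assumed for illustration): voxel-wise predictive mean
# and binary entropy from several stochastic segmentations of the same slice.
import numpy as np


def predictive_entropy(prob_samples):
    """prob_samples: (n_samples, H, W) foreground probabilities in [0, 1]."""
    p = prob_samples.mean(axis=0)                 # mean foreground probability
    eps = 1e-8                                    # avoid log(0)
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return p, entropy                             # both (H, W)


# Toy usage: 10 stochastic passes over a 64x64 slice.
samples = np.random.rand(10, 64, 64)
mean_prob, uncertainty = predictive_entropy(samples)
print(mean_prob.shape, uncertainty.shape, float(uncertainty.max()))
```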
Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise.
Bidur Khanal, Tianhong Dai, Binod Bhattarai, Cristian Linte
{"title":"Active Label Refinement for Robust Training of Imbalanced Medical Image Classification Tasks in the Presence of High Label Noise.","authors":"Bidur Khanal, Tianhong Dai, Binod Bhattarai, Cristian Linte","doi":"10.1007/978-3-031-72120-5_4","DOIUrl":"https://doi.org/10.1007/978-3-031-72120-5_4","url":null,"abstract":"<p><p>The robustness of supervised deep learning-based medical image classification is significantly undermined by label noise in the training data. Although several methods have been proposed to enhance classification performance in the presence of noisy labels, they face some challenges: 1) a struggle with class-imbalanced datasets, leading to the frequent overlooking of minority classes as noisy samples; 2) a singular focus on maximizing performance using noisy datasets, without incorporating experts-in-the-loop for actively cleaning the noisy labels. To mitigate these challenges, we propose a two-phase approach that combines Learning with Noisy Labels (LNL) and active learning. This approach not only improves the robustness of medical image classification in the presence of noisy labels but also iteratively improves the quality of the dataset by relabeling the important incorrect labels, under a limited annotation budget. Furthermore, we introduce a novel Variance of Gradients approach in the LNL phase, which complements the loss-based sample selection by also sampling under-represented examples. Using two imbalanced noisy medical classification datasets, we demonstrate that our proposed technique is superior to its predecessors at handling class imbalance by not misidentifying clean samples from minority classes as mostly noisy samples. Code available at: https://github.com/Bidur-Khanal/imbalanced-medical-active-label-cleaning.git.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15011 ","pages":"37-47"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11981598/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144056404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
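The Variance of Gradients idea mentioned in the abstract can be sketched in a few lines: score each sample by how much the gradient of its loss with respect to the input varies across training checkpoints, and treat high-variance samples as candidates for expert relabeling. The snippet below is a simplified stand-in, not the authors' implementation.

```python
# Sketch (simplified, not the authors' code): a Variance-of-Gradients-style score
# per sample, i.e. the variance across checkpoints of the input gradient of the
# loss. High-variance samples are flagged for the expert-in-the-loop to relabel.
import torch
import torch.nn.functional as F


def vog_scores(checkpoints, x, y):
    """checkpoints: list of models; x: (N, ...) inputs; y: (N,) integer labels."""
    grads = []
    for model in checkpoints:
        model.eval()
        xg = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(xg), y)
        grad, = torch.autograd.grad(loss, xg)
        grads.append(grad.flatten(1))          # (N, D) per-sample input gradients
    g = torch.stack(grads)                     # (checkpoints, N, D)
    return g.var(dim=0).mean(dim=1)            # (N,) mean gradient variance


# Toy usage with two randomly initialised "checkpoints" of a tiny classifier.
make_net = lambda: torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16, 3))
scores = vog_scores([make_net(), make_net()], torch.randn(5, 4, 4), torch.randint(0, 3, (5,)))
print(scores.shape)  # torch.Size([5])
```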
Adaptive Subtype and Stage Inference for Alzheimer's Disease.
Xinkai Wang, Yonggang Shi
{"title":"Adaptive Subtype and Stage Inference for Alzheimer's Disease.","authors":"Xinkai Wang, Yonggang Shi","doi":"10.1007/978-3-031-72384-1_5","DOIUrl":"10.1007/978-3-031-72384-1_5","url":null,"abstract":"<p><p>Subtype and Stage Inference (SuStaIn) is a useful Event-based Model for capturing both the temporal and the phenotypical patterns for any progressive disorders, which is essential for understanding the heterogeneous nature of such diseases. However, this model cannot capture subtypes with different progression rates with respect to predefined biomarkers with fixed events prior to inference. Therefore, we propose an adaptive algorithm for learning subtype-specific events while making subtype and stage inference. We use simulation to demonstrate the improvement with respect to various performance metrics. Finally, we provide snapshots of different levels of biomarker abnormality within different subtypes on Alzheimer's Disease (AD) data to demonstrate the effectiveness of our algorithm.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15003 ","pages":"46-55"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11632966/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
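For orientation, the staging half of an event-based model can be reduced to a very small computation: given an event ordering, find the number of events that best explains a subject's biomarker z-scores. The sketch below does only that, for one subject and one fixed ordering, and ignores subtyping, event-ordering inference, and uncertainty, so it is a heavily simplified assumption rather than SuStaIn or the authors' adaptive variant.

```python
# Sketch (heavily simplified assumption, not SuStaIn): maximum-likelihood staging
# of one subject under a fixed event ordering. Stage k means the first k events in
# `ordering` have occurred; occurred biomarkers sit near z_event, the rest near 0,
# both with unit Gaussian noise.
import numpy as np


def ml_stage(z, ordering, z_event):
    """z: (B,) observed z-scores; ordering: biomarker indices, first-to-last event;
    z_event: (B,) abnormality level reached once a biomarker's event occurs."""
    best_stage, best_ll = 0, -np.inf
    for k in range(len(ordering) + 1):
        occurred = np.zeros(len(z), dtype=bool)
        occurred[ordering[:k]] = True
        mean = np.where(occurred, z_event, 0.0)
        ll = -0.5 * np.sum((z - mean) ** 2)     # log N(z | mean, I) up to a constant
        if ll > best_ll:
            best_stage, best_ll = k, ll
    return best_stage


# Toy usage: biomarker 1 becomes abnormal first; this subject looks like stage 1.
print(ml_stage(np.array([0.2, 2.9, 0.1]), ordering=[1, 0, 2], z_event=np.full(3, 3.0)))
```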
PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts.
Hao Li, Han Liu, Dewei Hu, Jiacheng Wang, Ipek Oguz
{"title":"PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts.","authors":"Hao Li, Han Liu, Dewei Hu, Jiacheng Wang, Ipek Oguz","doi":"10.1007/978-3-031-72384-1_37","DOIUrl":"10.1007/978-3-031-72384-1_37","url":null,"abstract":"<p><p>In this paper, we present PRISM, a <b>P</b>romptable and <b>R</b>obust <b>I</b>nteractive <b>S</b>egmentation <b>M</b>odel, aiming for precise segmentation of 3D medical images. PRISM accepts various visual inputs, including points, boxes, and scribbles as sparse prompts, as well as masks as dense prompts. Specifically, PRISM is designed with four principles to achieve robustness: (1) Iterative learning. The model produces segmentations by using visual prompts from previous iterations to achieve progressive improvement. (2) Confidence learning. PRISM employs multiple segmentation heads per input image, each generating a continuous map and a confidence score to optimize predictions. (3) Corrective learning. Following each segmentation iteration, PRISM employs a shallow corrective refinement network to reassign mislabeled voxels. (4) Hybrid design. PRISM integrates hybrid encoders to better capture both the local and global information. Comprehensive validation of PRISM is conducted using four public datasets for tumor segmentation in the colon, pancreas, liver, and kidney, highlighting challenges caused by anatomical variations and ambiguous boundaries in accurate tumor identification. Compared to state-of-the-art methods, both with and without prompt engineering, PRISM significantly improves performance, achieving results that are close to human levels. The code is publicly available at https://github.com/MedICL-VU/PRISM.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15003 ","pages":"389-399"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12128912/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144217993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
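The iterative-learning principle above follows a pattern common to interactive segmentation: simulate a user who clicks on an erroneous region after each iteration and feed that click back as a prompt. The sketch below shows that generic loop with a hypothetical promptable `model`; it is not PRISM's training procedure.

```python
# Sketch (generic interactive-segmentation loop, not PRISM's code): after each
# iteration, sample one corrective click from the disagreement between the current
# prediction and the ground truth, and pass it to the next iteration as a prompt.
import numpy as np


def sample_corrective_click(pred, gt, rng):
    errors = np.argwhere(pred != gt)               # mis-segmented voxels
    if len(errors) == 0:
        return None
    idx = tuple(errors[rng.integers(len(errors))])
    return idx, bool(gt[idx])                      # click position and its true label


def iterative_segmentation(model, image, gt, n_iter=5, seed=0):
    rng = np.random.default_rng(seed)
    clicks, pred = [], np.zeros_like(gt, dtype=bool)
    for _ in range(n_iter):
        pred = model(image, clicks, prev_mask=pred)
        click = sample_corrective_click(pred, gt, rng)
        if click is None:                          # prediction already perfect
            break
        clicks.append(click)
    return pred, clicks


# Toy usage with a dummy "model" that ignores prompts and thresholds the image.
image = np.random.rand(32, 32)
truth = image > 0.5
dummy_model = lambda img, clicks, prev_mask: img > 0.6
mask, used_clicks = iterative_segmentation(dummy_model, image, truth)
print(mask.shape, len(used_clicks))
```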
Tagged-to-Cine MRI Sequence Synthesis via Light Spatial-Temporal Transformer.
Xiaofeng Liu, Fangxu Xing, Zhangxing Bian, Tomas Arias-Vergara, Paula Andrea Pérez-Toro, Andreas Maier, Maureen Stone, Jiachen Zhuo, Jerry L Prince, Jonghye Woo
{"title":"Tagged-to-Cine MRI Sequence Synthesis via Light Spatial-Temporal Transformer.","authors":"Xiaofeng Liu, Fangxu Xing, Zhangxing Bian, Tomas Arias-Vergara, Paula Andrea Pérez-Toro, Andreas Maier, Maureen Stone, Jiachen Zhuo, Jerry L Prince, Jonghye Woo","doi":"10.1007/978-3-031-72104-5_67","DOIUrl":"10.1007/978-3-031-72104-5_67","url":null,"abstract":"<p><p>Tagged magnetic resonance imaging (MRI) has been successfully used to track the motion of internal tissue points within moving organs. Typically, to analyze motion using tagged MRI, cine MRI data in the same coordinate system are acquired, incurring additional time and costs. Consequently, tagged-to-cine MR synthesis holds the potential to reduce the extra acquisition time and costs associated with cine MRI, without disrupting downstream motion analysis tasks. Previous approaches have processed each frame independently, thereby overlooking the fact that complementary information from occluded regions of the tag patterns could be present in neighboring frames exhibiting motion. Furthermore, the inconsistent visual appearance, e.g., tag fading, across frames can reduce synthesis performance. To address this, we propose an efficient framework for tagged-to-cine MR sequence synthesis, leveraging both spatial and temporal information with relatively limited data. Specifically, we follow a split-and-integral protocol to balance spatialtemporal modeling efficiency and consistency. The light spatial-temporal transformer (LiST<sup>2</sup>) is designed to exploit the local and global attention in motion sequence with relatively lightweight training parameters. The directional product relative position-time bias is adapted to make the model aware of the spatial-temporal correlation, while the shifted window is used for motion alignment. Then, a recurrent sliding fine-tuning (ReST) scheme is applied to further enhance the temporal consistency. Our framework is evaluated on paired tagged and cine MRI sequences, demonstrating superior performance over comparison methods.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15007 ","pages":"701-711"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11517403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
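One building block worth visualising is windowed spatial-temporal attention: tokens are grouped into small (time, height, width) windows and attention is computed within each window, with the windows shifted between layers for cross-window mixing. The partition step is sketched below in the style of video Swin; it is an illustration of the general mechanism, not the LiST² implementation.

```python
# Sketch (assumed illustration, not the authors' LiST^2 code): partition a
# spatial-temporal feature map into local (wt, wh, ww) windows, the unit over
# which windowed attention would be computed.
import torch


def window_partition_3d(x, wt, wh, ww):
    """x: (B, T, H, W, C) -> (num_windows * B, wt * wh * ww, C).
    T, H and W must be divisible by wt, wh and ww respectively."""
    B, T, H, W, C = x.shape
    x = x.view(B, T // wt, wt, H // wh, wh, W // ww, ww, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).contiguous()
    return x.view(-1, wt * wh * ww, C)


# Toy usage: 2 clips, 8 frames of 32x32 features with 16 channels, 2x8x8 windows.
tokens = window_partition_3d(torch.randn(2, 8, 32, 32, 16), 2, 8, 8)
print(tokens.shape)  # torch.Size([128, 128, 16])
```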
CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning.
Yuexi Du, Brian Chang, Nicha C Dvornek
{"title":"CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning.","authors":"Yuexi Du, Brian Chang, Nicha C Dvornek","doi":"10.1007/978-3-031-72390-2_44","DOIUrl":"10.1007/978-3-031-72390-2_44","url":null,"abstract":"<p><p>Recent advancements in Contrastive Language-Image Pre-training (CLIP) [21] have demonstrated notable success in self-supervised representation learning across various tasks. However, the existing CLIP-like approaches often demand extensive GPU resources and prolonged training times due to the considerable size of the model and dataset, making them poor for medical applications, in which large datasets are not always common. Meanwhile, the language model prompts are mainly manually derived from labels tied to images, potentially overlooking the richness of information within training samples. We introduce a novel language-image Contrastive Learning method with an Efficient large language model and prompt Fine-Tuning (CLEFT) that harnesses the strengths of the extensive pre-trained language and visual models. Furthermore, we present an efficient strategy for learning context-based prompts that mitigates the gap between informative clinical diagnostic data and simple class labels. Our method demonstrates state-of-the-art performance on multiple chest X-ray and mammography datasets compared with various baselines. The proposed parameter efficient framework can reduce the total trainable model size by 39% and reduce the trainable language model to only 4% compared with the current BERT encoder.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15012 ","pages":"465-475"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11709740/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142960994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
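The contrastive core that this line of work builds on is the symmetric image-text InfoNCE loss; a minimal version is sketched below. This is the generic CLIP-style objective, not the paper's exact loss, and it says nothing about the efficient LLM backbone or prompt fine-tuning components.

```python
# Sketch (standard CLIP-style symmetric contrastive loss, assumed for illustration):
# paired image/text embeddings attract, all other pairs in the batch repel.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """img_emb, txt_emb: (N, D) paired embeddings; row i of each is a match."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature      # (N, N) similarity matrix
    targets = torch.arange(img_emb.size(0))
    loss_i2t = F.cross_entropy(logits, targets)       # image -> matching text
    loss_t2i = F.cross_entropy(logits.t(), targets)   # text -> matching image
    return (loss_i2t + loss_t2i) / 2


# Toy usage: 8 paired image/text embeddings of size 128.
print(clip_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128)))
```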
TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration.
Nian Wu, Jiarui Xing, Miaomiao Zhang
{"title":"TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration.","authors":"Nian Wu, Jiarui Xing, Miaomiao Zhang","doi":"10.1007/978-3-031-72069-7_68","DOIUrl":"10.1007/978-3-031-72069-7_68","url":null,"abstract":"<p><p>This paper presents a novel approach, termed <i>Temporal Latent Residual Network (TLRN)</i>, to predict a sequence of deformation fields in time-series image registration. The challenge of registering time-series images often lies in the occurrence of large motions, especially when images differ significantly from a reference (e.g., the start of a cardiac cycle compared to the peak stretching phase). To achieve accurate and robust registration results, we leverage the nature of motion continuity and exploit the temporal smoothness in consecutive image frames. Our proposed TLRN highlights a temporal residual network with residual blocks carefully designed in latent deformation spaces, which are parameterized by time-sequential initial velocity fields. We treat a sequence of residual blocks over time as a dynamic training system, where each block is designed to learn the residual function between desired deformation features and current input accumulated from previous time frames. We validate the effectivenss of TLRN on both synthetic data and real-world cine cardiac magnetic resonance (CMR) image videos. Our experimental results shows that TLRN is able to achieve substantially improved registration accuracy compared to the state-of-the-art. Our code is publicly available at https://github.com/nellie689/TLRN.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15002 ","pages":"728-738"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929566/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143694983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
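To illustrate the overall shape of a temporal residual chain (without claiming the paper's architecture), the sketch below has one small block per frame that predicts a residual update to the previous frame's latent velocity field, so large deformations accumulate gradually over time; the block design and sizes are placeholders.

```python
# Sketch (assumed toy stand-in, not the TLRN architecture): a chain of residual
# blocks over time, each adding a correction to the previous frame's velocity.
import torch
import torch.nn as nn


class TemporalResidualChain(nn.Module):
    def __init__(self, channels=2, hidden=16, n_frames=5):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels * 2, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, channels, 3, padding=1),
            )
            for _ in range(n_frames)
        )

    def forward(self, v0, frame_feats):
        """v0: (B, C, H, W) initial velocity; frame_feats: list of (B, C, H, W)."""
        v, velocities = v0, []
        for block, feat in zip(self.blocks, frame_feats):
            v = v + block(torch.cat([v, feat], dim=1))   # residual update per frame
            velocities.append(v)
        return velocities


# Toy usage: 5 frames of 2-channel 32x32 features.
chain = TemporalResidualChain()
vs = chain(torch.zeros(1, 2, 32, 32), [torch.randn(1, 2, 32, 32) for _ in range(5)])
print(len(vs), vs[0].shape)  # 5 torch.Size([1, 2, 32, 32])
```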
HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis.
Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Juming Xiong, Shunxing Bao, Hao Li, Mengmeng Yin, Yu Wang, Shilin Zhao, Yucheng Tang, Haichun Yang, Yuankai Huo
{"title":"HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis.","authors":"Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Juming Xiong, Shunxing Bao, Hao Li, Mengmeng Yin, Yu Wang, Shilin Zhao, Yucheng Tang, Haichun Yang, Yuankai Huo","doi":"10.1007/978-3-031-72083-3_15","DOIUrl":"10.1007/978-3-031-72083-3_15","url":null,"abstract":"<p><p>Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy. For instance, the intricate organization in kidney pathology spans multiple layers, from regions like the cortex and medulla to functional units such as glomeruli, tubules, and vessels, down to various cell types. In this paper, we propose a novel Hierarchical Adaptive Taxonomy Segmentation (HATs) method, which is designed to thoroughly segment panoramic views of kidney structures by leveraging detailed anatomical insights. Our approach entails (1) the innovative HATs technique which translates spatial relationships among 15 distinct object classes into a versatile \"plug-and-play\" loss function that spans across regions, functional units, and cells, (2) the incorporation of anatomical hierarchies and scale considerations into a unified simple matrix representation for all panoramic entities, (3) the adoption of the latest AI foundation model (EfficientSAM) as a feature extraction tool to boost the model's adaptability, yet eliminating the need for manual prompt generation in conventional segment anything model (SAM). Experimental findings demonstrate that the HATs method offers an efficient and effective strategy for integrating clinical insights and imaging precedents into a unified segmentation model across more than 15 categories. The official implementation is publicly available at https://github.com/hrlblab/HATs.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15004 ","pages":"155-166"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11927787/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143694985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
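A simple way to picture a "plug-and-play" hierarchy loss is a binary matrix that maps each fine class to its coarse ancestors, so coarse-level probabilities can be obtained by summing fine-class probabilities and supervised with an extra cross-entropy term. The sketch below shows that generic construction under assumed class groupings; it is not the HATs loss itself.

```python
# Sketch (generic hierarchy-aware loss, assumed for illustration, not HATs): a
# binary ancestry matrix aggregates fine-class probabilities into coarse classes,
# and a coarse-level term is added to the usual fine-class cross-entropy.
import torch
import torch.nn.functional as F


def taxonomy_loss(logits, fine_labels, ancestry, coarse_labels, alpha=0.5):
    """logits: (N, F) fine-class logits; ancestry: (F, C) binary fine-to-coarse map;
    fine_labels: (N,) fine targets; coarse_labels: (N,) coarse targets."""
    fine_loss = F.cross_entropy(logits, fine_labels)
    fine_prob = logits.softmax(dim=-1)
    coarse_prob = fine_prob @ ancestry                  # (N, C) summed descendants
    coarse_loss = F.nll_loss(torch.log(coarse_prob + 1e-8), coarse_labels)
    return fine_loss + alpha * coarse_loss


# Toy usage: 4 fine classes grouped into 2 coarse regions.
A = torch.tensor([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
loss = taxonomy_loss(torch.randn(6, 4), torch.randint(0, 4, (6,)), A, torch.randint(0, 2, (6,)))
print(loss)
```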