Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention - Latest Articles

Unified Embeddings of Structural and Functional Connectome via a Function-Constrained Structural Graph Variational Auto-Encoder.
Carlo Amodeo, Igor Fortel, Olusola Ajilore, Liang Zhan, Alex Leow, Theja Tulabandhula
DOI: 10.1007/978-3-031-16431-6_39
Published: 2022-09-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11246745/pdf/
Abstract: Graph theoretical analyses have become standard tools for modeling functional and anatomical connectivity in the brain. With the advent of connectomics, the primary graphs of interest are the structural connectome (derived from DTI tractography) and the functional connectome (derived from resting-state fMRI). Most published connectome studies, however, have focused on either the structural or the functional connectome, yet the complementary information between them, when available in the same dataset, can be jointly leveraged to improve our understanding of the brain. To this end, we propose a function-constrained structural graph variational autoencoder (FCS-GVAE) capable of incorporating information from both the functional and structural connectome in an unsupervised fashion. This yields a joint low-dimensional embedding that establishes a unified spatial coordinate system for comparing across subjects. We evaluate our approach on the publicly available OASIS-3 Alzheimer's disease (AD) dataset and show that a variational formulation is necessary to optimally encode functional brain dynamics. Furthermore, the proposed joint embedding can distinguish different patient sub-populations more accurately than approaches that do not use the complementary connectome information.
Citations: 0
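The abstract above stresses that a variational formulation is needed to encode functional dynamics. As a generic illustration only (not the authors' FCS-GVAE code; all names here are hypothetical), the two ingredients a VAE adds over a plain autoencoder, the reparameterization trick and the KL regularizer against a standard normal prior, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: sample z = mu + sigma * eps,
    with eps drawn from N(0, I), so gradients can flow through mu/log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
```

When the encoder outputs mu = 0 and log_var = 0 (i.e. q already equals the prior), the KL term is exactly zero, which is the minimum of this regularizer.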
Brain-Aware Replacements for Supervised Contrastive Learning in Detection of Alzheimer's Disease.
Mehmet Saygın Seyfioğlu, Zixuan Liu, Pranav Kamath, Sadjyot Gangolli, Sheng Wang, Thomas Grabowski, Linda Shapiro
DOI: 10.1007/978-3-031-16431-6_44
Published: 2022-09-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11056282/pdf/
Abstract: We propose a novel framework for Alzheimer's disease (AD) detection using brain MRIs. The framework starts with a data augmentation method called Brain-Aware Replacements (BAR), which leverages a standard brain parcellation to replace medically relevant 3D brain regions in an anchor MRI with regions from a randomly picked MRI, creating synthetic samples. Ground-truth "hard" labels are linearly mixed according to the replacement ratio to create "soft" labels. BAR produces a wide variety of realistic-looking synthetic MRIs with higher local variability than other mix-based methods such as CutMix. On top of BAR, we propose a soft-label-capable supervised contrastive loss that learns the relative similarity of representations, reflecting how heavily mixed the synthetic MRIs are. In this way we do not fully exhaust the entropic capacity of the hard labels, since they are used only to create soft labels and synthetic MRIs through BAR. We show that a model pre-trained with our framework can be further fine-tuned with a cross-entropy loss using the hard labels that were used to create the synthetic samples. We validated the framework on a binary AD detection task against both from-scratch supervised training and state-of-the-art self-supervised pre-training plus fine-tuning, and then compared BAR against the mix-based method CutMix by integrating each within our framework. Our framework yields superior precision and recall on the AD detection task.
Citations: 0
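The core BAR idea, replacing a parcel-masked region of an anchor volume with the corresponding region from another volume and mixing the one-hot labels by the replacement ratio, can be sketched generically as follows (a minimal illustration with hypothetical names, not the authors' implementation):

```python
import numpy as np

def region_replace_mix(anchor, other, anchor_label, other_label, mask):
    """Replace the voxels selected by a boolean parcel mask in `anchor`
    with the corresponding voxels from `other`, and linearly mix the
    one-hot labels by the fraction of voxels that were replaced."""
    mixed = np.where(mask, other, anchor)
    ratio = mask.mean()  # fraction of voxels taken from `other`
    soft_label = (1.0 - ratio) * anchor_label + ratio * other_label
    return mixed, soft_label
```

Unlike CutMix's rectangular patches, BAR as described above uses anatomically meaningful parcels for the mask, which is what the boolean `mask` argument would encode.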
Anatomy-Guided Weakly-Supervised Abnormality Localization in Chest X-rays.
Ke Yu, Shantanu Ghosh, Zhexiong Liu, Christopher Deible, Kayhan Batmanghelich
Published: 2022-09-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11215940/pdf/
Abstract: Creating a large-scale dataset of abnormality annotations on medical images is labor-intensive and costly. Leveraging weak supervision from readily available data such as radiology reports can compensate for the lack of large-scale data for anomaly detection methods. However, most current methods use only image-level pathological observations, failing to exploit the relevant anatomy mentions in reports, and weak labels mined with Natural Language Processing (NLP) are noisy due to label sparsity and linguistic ambiguity. We propose an Anatomy-Guided chest X-ray Network (AGXNet) to address these issues of weak annotation. Our framework consists of a cascade of two networks, one identifying anatomical abnormalities and the other pathological observations. Its critical component is an anatomy-guided attention module that helps the downstream observation network focus on the relevant anatomical regions generated by the anatomy network. We use Positive Unlabeled (PU) learning to account for the fact that a lack of mention does not necessarily imply a negative label. Quantitative and qualitative results on the MIMIC-CXR dataset demonstrate the effectiveness of AGXNet in disease and anatomical abnormality localization. Experiments on the NIH Chest X-ray dataset show that the learned feature representations are transferable, achieving state-of-the-art disease classification performance and competitive disease localization results. Our code is available at https://github.com/batmanlab/AGXNet.
Citations: 0
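The Positive-Unlabeled learning mentioned above treats "no mention" as unlabeled rather than negative. A standard non-negative PU risk estimator (a generic formulation in the style of the PU-learning literature, not the AGXNet code; the logistic loss and the names below are assumptions) weights the positive risk by the class prior and clamps the estimated negative risk at zero:

```python
import numpy as np

def logistic_loss(scores, y):
    """Logistic surrogate loss for labels y in {+1, -1}."""
    return np.log1p(np.exp(-y * scores))

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk: unlabeled data is a prior-weighted mix of
    positives and negatives, so the negative-class risk is estimated as
    R_u^- minus the positives' contribution, clamped at zero."""
    r_pos = prior * logistic_loss(scores_pos, +1).mean()
    r_neg = logistic_loss(scores_unl, -1).mean() - prior * logistic_loss(scores_pos, -1).mean()
    return r_pos + max(0.0, r_neg)
```

The clamp (the "non-negative" part) prevents the risk estimate from going negative when the model overfits the unlabeled set.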
Style Transfer Using Generative Adversarial Networks for Multi-Site MRI Harmonization.
Mengting Liu, Piyush Maiti, Sophia Thomopoulos, Alyssa Zhu, Yaqiong Chai, Hosung Kim, Neda Jahanshad
DOI: 10.1007/978-3-030-87199-4_30
Published: 2021-09-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9137427/pdf/
Abstract: Large data initiatives and high-powered brain imaging analyses require pooling MR images acquired across multiple scanners, often with different protocols. Prospective cross-site harmonization often involves a phantom or traveling subjects; however, as more datasets become publicly available, there is a growing need for retrospective harmonization that pools data from sites not originally coordinated together. Several retrospective harmonization techniques have shown promise in removing cross-site image variation, but most unsupervised methods cannot distinguish image-acquisition variability from cross-site population variability, and therefore require that datasets contain subjects or patient groups with similar clinical or demographic characteristics. To overcome this limitation, we treat cross-site MRI harmonization as a style transfer problem rather than a domain transfer problem. Using a fully unsupervised deep-learning framework based on a generative adversarial network (GAN), we show that MR images can be harmonized by directly inserting the style information encoded from a reference image, without knowing site/scanner labels a priori. We trained our model on five large-scale multi-site datasets with varied demographics. Results demonstrate that our style-encoding model successfully harmonizes MR images and matches intensity profiles without relying on traveling subjects, and it avoids the need to control for clinical, diagnostic, or demographic information. Moreover, given sufficiently diverse training images, our method successfully harmonized MR images collected from unseen scanners and protocols, suggesting a promising novel tool for ongoing collaborative studies.
Citations: 0
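One widely used mechanism for injecting a reference image's style into content features, in the spirit of the style encoding described above, is adaptive instance normalization (AdaIN). This is a standard technique from the style-transfer literature, not necessarily the exact operation inside the authors' GAN:

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: normalize each channel of the
    content features, then re-scale and re-shift them to match the
    channel-wise mean/std of the style (reference) features.
    Both inputs are feature maps of shape (channels, H, W)."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    return s_std * (content_feat - c_mean) / (c_std + eps) + s_mean
```

The appeal for harmonization is that the operation needs no site labels: the intensity statistics of the reference image alone determine the target "style," matching the unsupervised setting of the paper.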
Cortical Surface Parcellation using Spherical Convolutional Neural Networks.
Prasanna Parvathaneni, Shunxing Bao, Vishwesh Nath, Neil D Woodward, Daniel O Claassen, Carissa J Cascio, David H Zald, Yuankai Huo, Bennett A Landman, Ilwoo Lyu
DOI: 10.1007/978-3-030-32248-9_56
Published: 2019-10-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6892466/pdf/nihms-1059107.pdf
Abstract: We present cortical surface parcellation using spherical deep convolutional neural networks. Traditional multi-atlas cortical surface parcellation requires inter-subject surface registration using geometric features, which is slow (2-3 hours per subject). Moreover, even optimal surface registration does not necessarily produce optimal cortical parcellation, as parcel boundaries are not fully matched to the geometric features. In this context, the choice of training features is important for accurate parcellation. To use the networks efficiently, we propose cortical parcellation-specific input data derived from the irregular, complicated structure of cortical surfaces. To this end, we align ground-truth cortical parcel boundaries and use the resulting deformation fields to generate new pairs of deformed geometric features and parcellation maps. To extend the networks' capability, we then smoothly morph cortical geometric features and parcellation maps using the intermediate deformation fields. We validate our method on 427 adult brains with 49 labels. The experimental results show that our method outperforms traditional multi-atlas and naive spherical U-Net approaches, while achieving full cortical parcellation in under a minute.
Citations: 16
Active Appearance Model Induced Generative Adversarial Network for Controlled Data Augmentation.
Jianfei Liu, Christine Shen, Tao Liu, Nancy Aguilera, Johnny Tam
DOI: 10.1007/978-3-030-32239-7_23
Published: 2019-10-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6834374/pdf/nihms-1055537.pdf
Abstract: Data augmentation is an important strategy for enlarging training datasets in deep learning-based medical image analysis, because large, annotated medical datasets are not only difficult and costly to generate but also quickly become obsolete due to rapid advances in imaging technology. Image-to-image conditional generative adversarial networks (C-GAN) provide a potential solution for data augmentation; however, annotations used as inputs to C-GAN are typically based only on shape information, which can produce undesirable intensity distributions in the artificially created images. In this paper, we introduce an active cell appearance model (ACAM) that measures statistical distributions of both shape and intensity, and we use this model to guide a C-GAN to generate more realistic images, a combination we call A-GAN. A-GAN provides an effective means of conveying anisotropic intensity information to the C-GAN, and its statistical model (ACAM) determines how transformations are applied for data augmentation. Traditional augmentation approaches based on arbitrary transformations can lead to unrealistic shape variations that are not representative of real data; A-GAN is designed to ameliorate this. To validate A-GAN for data augmentation, we assessed its performance on cell analysis in adaptive optics retinal imaging, a rapidly changing medical imaging modality. Compared to C-GAN, A-GAN achieved stability in fewer iterations, and cell detection and segmentation accuracy was higher with A-GAN augmentation than with C-GAN. These findings demonstrate the potential for A-GAN to substantially improve existing data augmentation methods in medical image analysis.
Citations: 13
Pancreas Segmentation in MRI using Graph-Based Decision Fusion on Convolutional Neural Networks.
Jinzheng Cai, Le Lu, Zizhao Zhang, Fuyong Xing, Lin Yang, Qian Yin
DOI: 10.1007/978-3-319-46723-8_51
Published: 2016-10-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5223591/pdf/
Abstract: Automated pancreas segmentation in medical images is a prerequisite for many clinical applications, such as diabetes inspection, pancreatic cancer diagnosis, and surgical planning. In this paper, we formulate pancreas segmentation in magnetic resonance imaging (MRI) scans as a graph-based decision fusion process combined with deep convolutional neural networks (CNNs). Our approach performs pancreatic detection and boundary segmentation with two CNN models: 1) a tissue detection step that differentiates pancreas from non-pancreas tissue using spatial intensity context, and 2) a boundary detection step that locates the semantic boundaries of the pancreas. The outputs of the two networks are fused as the initialization of a conditional random field (CRF) framework to obtain the final segmentation. Our approach achieves a mean Dice similarity coefficient (DSC) of 76.1% with a standard deviation of 8.7% on a dataset of 78 abdominal MRI scans, the best result among state-of-the-art methods.
Citations: 0
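The metric reported above, the Dice similarity coefficient (DSC), has a standard definition independent of the paper's method: twice the overlap of the two masks divided by the sum of their sizes. A minimal implementation:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred AND target| / (|pred| + |target|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / denom if denom else 1.0
```

A DSC of 76.1% therefore means the predicted and ground-truth pancreas masks share roughly three quarters of their combined volume.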
Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders.
Hai Su, Fuyong Xing, Xiangfei Kong, Yuanpu Xie, Shaoting Zhang, Lin Yang
DOI: 10.1007/978-3-319-24574-4_46
Published: 2015-10-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5081214/pdf/
Abstract: Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis, and cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shape, touching cells, and cluttered backgrounds. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles shape variations by representing a test patch as a linear combination of shapes in a learned dictionary, while trivial templates model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and an sDAE with structured labels to cell detection and segmentation. The proposed method is extensively tested on two datasets containing more than 3,000 cells from brain tumor and lung cancer images, and our algorithm achieves the best performance among state-of-the-art methods.
Citations: 0
Automated Model-Based Segmentation of the Left and Right Ventricles in Tagged Cardiac MRI.
Albert Montillo, Dimitris Metaxas, Leon Axel
DOI: 10.1007/978-3-540-39899-8_63
Published: 2003-01-01
PDF: https://sci-hub-pdf.com/10.1007/978-3-540-39899-8_63
Abstract: We describe an automated, model-based method to segment the left and right ventricles in 4D tagged MR. We fit 3D epicardial and endocardial surface models to ventricle features extracted from the image data. Excellent segmentation is achieved using novel methods that (1) initialize the models and (2) compute 3D model forces from 2D tagged MR images. The 3D forces guide the models to patient-specific anatomy, while the fit is regularized via the internal deformation strain energy of a thin plate; deformation continues until the forces equilibrate or vanish. The segmentations are validated quantitatively and qualitatively on normal and diseased subjects.
Citations: 20
Automated Segmentation of the Left and Right Ventricles in 4D Cardiac SPAMM Images.
Albert Montillo, Dimitris Metaxas, Leon Axel
DOI: 10.1007/3-540-45786-0_77
Published: 2002-09-01
PDF: https://sci-hub-pdf.com/10.1007/3-540-45786-0_77
Abstract: In this paper, we describe a completely automated volume-based method for segmenting the left and right ventricles in 4D tagged MR (SPAMM) images for quantitative cardiac analysis. We correct the background intensity variation in each volume caused by surface coils using a new scale-based fuzzy connectedness procedure, and apply 3D grayscale opening to the corrected data to create volumes containing only the blood-filled regions. We threshold the volumes either by minimizing region variance or by an adaptive statistical thresholding method, and isolate the ventricular blood-filled regions using a novel approach based on spatial and temporal shape similarity. These regions define the endocardium contours and initialize an active contour that locates the epicardium through the gradient vector flow of an edge map of a grayscale-closed image. Both quantitative and qualitative results on normal and diseased patients are presented.
Citations: 59
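Thresholding "by minimizing region variance," as mentioned in the abstract above, is essentially Otsu's classic method: pick the intensity cut that minimizes intra-class variance, equivalently maximizing the between-class variance. A minimal histogram-based sketch (a generic illustration, not the authors' implementation):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the intensity threshold maximizing between-class variance
    (equivalent to minimizing the within-class, i.e. region, variance)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()          # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2.0     # bin center intensities
    w0 = np.cumsum(p)                            # weight of class below cut
    w1 = 1.0 - w0                                # weight of class above cut
    mu_cum = np.cumsum(p * centers)              # cumulative class-0 mass
    mu0 = mu_cum / np.maximum(w0, 1e-12)         # class-0 mean
    mu1 = (mu_cum[-1] - mu_cum) / np.maximum(w1, 1e-12)  # class-1 mean
    between = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
    return centers[np.argmax(between)]
```

On a bimodal blood-pool/background histogram like the one implied above, the selected cut falls between the two intensity clusters.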