Medical image understanding and analysis : 24th Annual Conference, MIUA 2020, Oxford, UK, July 15-17, 2020, Proceedings. Medical Image Understanding and Analysis (Conference) (24th : 2020 : Online) - Latest Publications

SCorP: Statistics-Informed Dense Correspondence Prediction Directly from Unsegmented Medical Images.
Krithika Iyer, Jadie Adams, Shireen Y Elhabian
DOI: 10.1007/978-3-031-66955-2_10 | Volume 14859, pages 142-157 | Published 2024-07-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11495401/pdf/
Abstract: Statistical shape modeling (SSM) is a powerful computational framework for quantifying and analyzing the geometric variability of anatomical structures, facilitating advancements in medical research, diagnostics, and treatment planning. Traditional methods for shape modeling from imaging data demand significant manual and computational resources. Additionally, these methods necessitate repeating the entire modeling pipeline to derive shape descriptors (e.g., surface-based point correspondences) for new data. While deep learning approaches have shown promise in streamlining the construction of SSMs on new data, they still rely on traditional techniques to supervise the training of the deep networks. Moreover, the predominant linearity assumption of traditional approaches restricts their efficacy, a limitation also inherited by deep learning models trained using optimized/established correspondences. Consequently, representing complex anatomies becomes challenging. To address these limitations, we introduce SCorP, a novel framework capable of predicting surface-based correspondences directly from unsegmented images. By leveraging the shape prior learned directly from surface meshes in an unsupervised manner, the proposed model eliminates the need for an optimized shape model for training supervision. The strong shape prior acts as a teacher and regularizes the feature learning of the student network to guide it in learning image-based features that are predictive of surface correspondences. The proposed model streamlines the training and inference phases by removing the supervision for the correspondence prediction task while alleviating the linearity assumption. Experiments on the LGE MRI left atrium dataset and the Abdomen CT-1K liver dataset demonstrate that the proposed technique enhances the accuracy and robustness of image-driven SSM, providing a compelling alternative to current fully supervised methods.
An illustrative code sketch follows this entry.
Citations: 0
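Because the abstract above describes the teacher-student arrangement only at a high level, here is a minimal, hypothetical PyTorch-style sketch in that spirit: a shape-prior "teacher" autoencodes surface correspondence points, and an image-branch "student" is pulled toward the teacher's latent code so that decoding the image-derived code yields correspondences directly from the unsegmented image. All module names, layer sizes, and loss weights are assumptions for illustration, not the authors' implementation.

# Minimal sketch of a teacher-student correspondence setup (illustrative only).
import torch
import torch.nn as nn

class ShapePriorTeacher(nn.Module):
    """Autoencodes M correspondence points (M x 3) through a low-dimensional latent code."""
    def __init__(self, num_points=256, latent_dim=64):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(num_points * 3, 256), nn.ReLU(),
            nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_points * 3))

    def forward(self, points):                        # points: (B, M, 3)
        z = self.encoder(points)
        recon = self.decoder(z).view(-1, self.num_points, 3)
        return z, recon

class ImageStudent(nn.Module):
    """Predicts a latent shape code directly from a 3D image volume."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, latent_dim))

    def forward(self, image):                         # image: (B, 1, D, H, W)
        return self.backbone(image)

def training_step(teacher, student, image, mesh_points, alpha=1.0, beta=0.1):
    """Teacher reconstructs the surface points (shape prior); the student's
    image-derived code is distilled toward the teacher's code, so decoding it
    produces image-driven correspondences."""
    z_teacher, recon = teacher(mesh_points)
    loss_prior = nn.functional.mse_loss(recon, mesh_points)
    z_student = student(image)
    loss_distill = nn.functional.mse_loss(z_student, z_teacher.detach())
    pred_points = teacher.decoder(z_student).view_as(mesh_points)
    loss_corr = nn.functional.mse_loss(pred_points, mesh_points)
    return loss_prior + alpha * loss_distill + beta * loss_corr

At inference time only the student and the teacher's decoder would be needed, which is consistent with the abstract's claim of predicting correspondences directly from unsegmented images.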
M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks.
Bidur Khanal, Binod Bhattarai, Bishesh Khanal, Danail Stoyanov, Cristian A Linte
DOI: 10.1007/978-3-031-48593-0_4 | Volume 14122, pages 48-63 | Published 2024-01-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11328674/pdf/
Abstract: Acquiring properly annotated data is expensive in the medical field as it requires experts, time-consuming protocols, and rigorous validation. Active learning attempts to minimize the need for large annotated samples by actively sampling the most informative examples for annotation. These examples contribute significantly to improving the performance of supervised machine learning models, and thus, active learning can play an essential role in selecting the most appropriate information in deep learning-based diagnosis, clinical assessments, and treatment planning. Although some existing works have proposed methods for sampling the best examples for annotation in medical image analysis, they are not task-agnostic and do not use multimodal auxiliary information in the sampler, which has the potential to increase robustness. Therefore, in this work, we propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance the active sampling. We applied our method to two datasets: i) brain tumor segmentation and multi-label classification using the BraTS2018 dataset, and ii) chest X-ray image classification using the COVID-QU-Ex dataset. Our results show a promising direction toward data-efficient learning under limited annotations.
An illustrative code sketch follows this entry.
Citations: 0
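The selection step sketched below follows the general VAAL recipe that M-VAAL builds on: an encoder embeds an unlabeled sample together with an auxiliary modality, and a discriminator trained to tell labeled from unlabeled latent codes scores the pool; the samples that look least "labeled" are queried for annotation. This is a hypothetical PyTorch sketch under those assumptions; the fusion-by-concatenation encoder, layer sizes, and function names are illustrative, not the paper's exact design.

# Minimal sketch of a VAAL-style active sampling step with an auxiliary modality.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Maps a pair of flattened inputs (primary image features + auxiliary modality) to a latent code."""
    def __init__(self, dim_a, dim_b, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))

    def forward(self, x_a, x_b):
        return self.net(torch.cat([x_a, x_b], dim=1))

class LabeledDiscriminator(nn.Module):
    """Outputs the probability that a latent code comes from the labeled pool."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).squeeze(1)

def select_for_annotation(encoder, discriminator, unlabeled_a, unlabeled_b, budget):
    """Query the unlabeled samples the discriminator is least confident are labeled."""
    with torch.no_grad():
        z = encoder(unlabeled_a, unlabeled_b)
        p_labeled = discriminator(z)                   # low score = informative
    return torch.topk(-p_labeled, k=budget).indices    # indices into the unlabeled pool

In the adversarial training that precedes this step (omitted here), the encoder would be trained so labeled and unlabeled codes become indistinguishable while the discriminator learns to separate them; the selection above then exploits the remaining disagreement.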
Medical Image Understanding and Analysis: 26th Annual Conference, MIUA 2022, Cambridge, UK, July 27–29, 2022, Proceedings
DOI: 10.1007/978-3-031-12053-4 | Published 2022-01-01
Citations: 1
Medical Image Understanding and Analysis: 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12–14, 2021, Proceedings
DOI: 10.1007/978-3-030-80432-9 | Published 2021-01-01
Citations: 0
A Supervised Image Registration Approach for Late Gadolinium Enhanced MRI and Cine Cardiac MRI Using Convolutional Neural Networks.
Roshan Reddy Upendra, Richard Simon, Cristian A Linte
DOI: 10.1007/978-3-030-52791-4_17 | Volume 1248, pages 208-220 | Published 2020-07-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8285264/pdf/nihms-1705222.pdf
Abstract: Late gadolinium enhanced (LGE) cardiac magnetic resonance (CMR) imaging is the current gold standard for assessing myocardium viability for patients diagnosed with myocardial infarction, myocarditis or cardiomyopathy. This imaging method enables the identification and quantification of myocardial tissue regions that appear hyper-enhanced. However, the delineation of the myocardium is hampered by the reduced contrast between the myocardium and the left ventricle (LV) blood-pool due to the gadolinium-based contrast agent. The balanced Steady State Free Precession (bSSFP) cine CMR imaging provides high resolution images with superior contrast between the myocardium and the LV blood-pool. Hence, the registration of the LGE CMR images and the bSSFP cine CMR images is a vital step for accurate localization and quantification of the compromised myocardial tissue. Here, we propose a Spatial Transformer Network (STN) inspired convolutional neural network (CNN) architecture to perform supervised registration of bSSFP cine CMR and LGE CMR images. We evaluate our proposed method on the 2019 Multi-Sequence Cardiac Magnetic Resonance Segmentation Challenge (MS-CMRSeg) dataset and use several evaluation metrics, including the center-to-center LV and right ventricle (RV) blood-pool distance, and the contour-to-contour blood-pool and myocardium distance between the LGE and bSSFP CMR images. Specifically, we showed that our registration method reduced the bSSFP to LGE LV blood-pool center distance from 3.28 mm before registration to 2.27 mm post registration, and the RV blood-pool center distance from 4.35 mm before registration to 2.52 mm post registration. We also show that the average surface distance (ASD) between bSSFP and LGE is reduced from 2.53 mm to 2.09 mm, 1.78 mm to 1.40 mm, and 2.42 mm to 1.73 mm for the LV blood-pool, LV myocardium, and RV blood-pool, respectively.
An illustrative code sketch follows this entry.
Citations: 0
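As a rough illustration of the Spatial Transformer Network idea the abstract refers to, the PyTorch sketch below regresses a 2D affine transform from a stacked (moving, fixed) slice pair and resamples the moving image with affine_grid/grid_sample; the loss combines image similarity with supervision on known affine parameters. The layer sizes, the 2D (rather than 3D) setting, and the affine transform model are simplifying assumptions for illustration, not the authors' exact architecture.

# Minimal STN-style supervised affine registration sketch for 2D slices (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.regressor = nn.Linear(32, 6)              # flattened 2x3 affine matrix
        # Initialize to the identity transform, as in the original STN formulation.
        nn.init.zeros_(self.regressor.weight)
        self.regressor.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, moving, fixed):                  # each: (B, 1, H, W)
        theta = self.regressor(self.features(torch.cat([moving, fixed], dim=1)))
        theta = theta.view(-1, 2, 3)
        grid = F.affine_grid(theta, moving.size(), align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        return warped, theta

def supervised_loss(warped, theta, fixed, theta_gt, lam=1.0):
    """Image similarity between the warped moving image and the fixed image,
    plus supervision against known affine parameters."""
    return F.mse_loss(warped, fixed) + lam * F.mse_loss(theta, theta_gt)

A deformable registration variant would replace the 6-parameter regressor with a dense displacement field, but the sampling mechanism (grid construction followed by grid_sample) stays the same.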
Discovering Unknown Diseases with Explainable Automated Medical Imaging
C. Tang
DOI: 10.1007/978-3-030-52791-4_27 | Pages 346-358 | Published 2020-06-09
Citations: 1
Medical Image Understanding and Analysis: 24th Annual Conference, MIUA 2020, Oxford, UK, July 15-17, 2020, Proceedings
B. Papież, A. Namburete, Simone Diniz Junqueira Barbosa, Phoebe Chen, A. Cuzzocrea, Xiaoyong Du, Orhun Kara, Ting Liu, K. Sivalingam, D. Ślęzak, T. Washio, Xiaokang Yang, Junsong Yuan, R. Prates, Mohammad Yaqub, J. Noble
DOI: 10.1007/978-3-030-52791-4 | Published 2020-01-01
Citations: 1
Medical Image Understanding and Analysis: 23rd Conference, MIUA 2019, Liverpool, UK, July 24–26, 2019, Proceedings
DOI: 10.1007/978-3-030-39343-4 | Published 2020-01-01
Citations: 2