{"title":"SCorP: Statistics-Informed Dense Correspondence Prediction Directly from Unsegmented Medical Images.","authors":"Krithika Iyer, Jadie Adams, Shireen Y Elhabian","doi":"10.1007/978-3-031-66955-2_10","DOIUrl":"https://doi.org/10.1007/978-3-031-66955-2_10","url":null,"abstract":"<p><p>Statistical shape modeling (SSM) is a powerful computational framework for quantifying and analyzing the geometric variability of anatomical structures, facilitating advancements in medical research, diagnostics, and treatment planning. Traditional methods for shape modeling from imaging data demand significant manual and computational resources. Additionally, these methods necessitate repeating the entire modeling pipeline to derive shape descriptors (e.g., surface-based point correspondences) for new data. While deep learning approaches have shown promise in streamlining the construction of SSMs on new data, they still rely on traditional techniques to supervise the training of the deep networks. Moreover, the predominant linearity assumption of traditional approaches restricts their efficacy, a limitation also inherited by deep learning models trained using optimized/established correspondences. Consequently, representing complex anatomies becomes challenging. To address these limitations, we introduce SCorP, a novel framework capable of predicting surface-based correspondences directly from unsegmented images. By leveraging the shape prior learned directly from surface meshes in an unsupervised manner, the proposed model eliminates the need for an optimized shape model for training supervision. The strong shape prior acts as a teacher and regularizes the feature learning of the student network to guide it in learning image-based features that are predictive of surface correspondences. The proposed model streamlines the training and inference phases by removing the supervision for the correspondence prediction task while alleviating the linearity assumption. Experiments on the LGE MRI left atrium dataset and Abdomen CT-1K liver datasets demonstrate that the proposed technique enhances the accuracy and robustness of image-driven SSM, providing a compelling alternative to current fully supervised methods.</p>","PeriodicalId":93335,"journal":{"name":"Medical image understanding and analysis : 24th Annual Conference, MIUA 2020, Oxford, UK, July 15-17, 2020, Proceedings. Medical Image Understanding and Analysis (Conference) (24th : 2020 : Online)","volume":"14859 ","pages":"142-157"},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11495401/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142514467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks.","authors":"Bidur Khanal, Binod Bhattarai, Bishesh Khanal, Danail Stoyanov, Cristian A Linte","doi":"10.1007/978-3-031-48593-0_4","DOIUrl":"10.1007/978-3-031-48593-0_4","url":null,"abstract":"<p><p>Acquiring properly annotated data is expensive in the medical field as it requires experts, time-consuming protocols, and rigorous validation. Active learning attempts to minimize the need for large annotated samples by actively sampling the most informative examples for annotation. These examples contribute significantly to improving the performance of supervised machine learning models, and thus, active learning can play an essential role in selecting the most appropriate information in deep learning-based diagnosis, clinical assessments, and treatment planning. Although some existing works have proposed methods for sampling the best examples for annotation in medical image analysis, they are not task-agnostic and do not use multimodal auxiliary information in the sampler, which has the potential to increase robustness. Therefore, in this work, we propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance the active sampling. We applied our method to two datasets: i) brain tumor segmentation and multi-label classification using the BraTS2018 dataset, and ii) chest X-ray image classification using the COVID-QU-Ex dataset. Our results show a promising direction toward data-efficient learning under limited annotations.</p>","PeriodicalId":93335,"journal":{"name":"Medical image understanding and analysis : 24th Annual Conference, MIUA 2020, Oxford, UK, July 15-17, 2020, Proceedings. Medical Image Understanding and Analysis (Conference) (24th : 2020 : Online)","volume":"14122 ","pages":"48-63"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11328674/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142001558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Supervised Image Registration Approach for Late Gadolinium Enhanced MRI and Cine Cardiac MRI Using Convolutional Neural Networks.","authors":"Roshan Reddy Upendra, Richard Simon, Cristian A Linte","doi":"10.1007/978-3-030-52791-4_17","DOIUrl":"10.1007/978-3-030-52791-4_17","url":null,"abstract":"<p><p>Late gadolinium enhanced (LGE) cardiac magnetic resonance (CMR) imaging is the current gold standard for assessing myocardium viability for patients diagnosed with myocardial infarction, myocarditis or cardiomyopathy. This imaging method enables the identification and quantification of myocardial tissue regions that appear hyper-enhanced. However, the delineation of the myocardium is hampered by the reduced contrast between the myocardium and the left ventricle (LV) blood-pool due to the gadolinium-based contrast agent. The balanced-Steady State Free Precession (bSSFP) cine CMR imaging provides high resolution images with superior contrast between the myocardium and the LV blood-pool. Hence, the registration of the LGE CMR images and the bSSFP cine CMR images is a vital step for accurate localization and quantification of the compromised myocardial tissue. Here, we propose a Spatial Transformer Network (STN) inspired convolutional neural network (CNN) architecture to perform supervised registration of bSSFP cine CMR and LGE CMR images. We evaluate our proposed method on the 2019 Multi-Sequence Cardiac Magnetic Resonance Segmentation Challenge (MS-CMRSeg) dataset and use several evaluation metrics, including the center-to-center LV and right ventricle (RV) blood-pool distance, and the contour-to-contour blood-pool and myocardium distance between the LGE and bSSFP CMR images. Specifically, we showed that our registration method reduced the bSSFP to LGE LV blood-pool center distance from 3.28mm before registration to 2.27mm post registration and RV blood-pool center distance from 4.35mm before registration to 2.52mm post registration. We also show that the average surface distance (ASD) between bSSFP and LGE is reduced from 2.53mm to 2.09mm, 1.78mm to 1.40mm and 2.42mm to 1.73mm for LV blood-pool, LV myocardium and RV blood-pool, respectively.</p>","PeriodicalId":93335,"journal":{"name":"Medical image understanding and analysis : 24th Annual Conference, MIUA 2020, Oxford, UK, July 15-17, 2020, Proceedings. Medical Image Understanding and Analysis (Conference) (24th : 2020 : Online)","volume":"1248 ","pages":"208-220"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8285264/pdf/nihms-1705222.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39200416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medical Image Understanding and Analysis: 24th Annual Conference, MIUA 2020, Oxford, UK, July 15-17, 2020, Proceedings","authors":"B. Papież, A. Namburete, Simone Diniz Junqueira Barbosa, Phoebe Chen, A. Cuzzocrea, Xiaoyong Du, Orhun Kara, Ting Liu, K. Sivalingam, D. Ślęzak, T. Washio, Xiaokang Yang, Junsong Yuan, R. Prates, Mohammad Yaqub, J. Noble","doi":"10.1007/978-3-030-52791-4","DOIUrl":"https://doi.org/10.1007/978-3-030-52791-4","url":null,"abstract":"","PeriodicalId":93335,"journal":{"name":"Medical image understanding and analysis : 24th Annual Conference, MIUA 2020, Oxford, UK, July 15-17, 2020, Proceedings. Medical Image Understanding and Analysis (Conference) (24th : 2020 : Online)","volume":"63 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91310035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}