Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention - Latest Publications
Physics-Informed Neural Networks for Tissue Elasticity Reconstruction in Magnetic Resonance Elastography
Matthew Ragoza, Kayhan Batmanghelich
Volume 14229, pp. 333-343, published 2023-10-01. DOI: 10.1007/978-3-031-43999-5_32. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141115/pdf/
Abstract: Magnetic resonance elastography (MRE) is a medical imaging modality that non-invasively quantifies tissue stiffness (elasticity) and is commonly used for diagnosing liver fibrosis. Constructing an elasticity map of tissue requires solving an inverse problem involving a partial differential equation (PDE). Current numerical techniques to solve the inverse problem are noise-sensitive and require explicit specification of physical relationships. In this work, we apply physics-informed neural networks to solve the inverse problem of tissue elasticity reconstruction. Our method does not rely on numerical differentiation and can be extended to learn relevant correlations from anatomical images while respecting physical constraints. We evaluate our approach on simulated data and in vivo data from a cohort of patients with non-alcoholic fatty liver disease (NAFLD). Compared to numerical baselines, our method is more robust to noise and more accurate on realistic data, and its performance is further enhanced by incorporating anatomical information.

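The numerical baseline the abstract contrasts against can be made concrete with a minimal sketch (an illustration of the generic direct-inversion approach, not the authors' method, with made-up parameter values): under a homogeneous, lossless Helmholtz model, mu * lap(u) + rho * omega^2 * u = 0, so stiffness follows algebraically as mu = -rho * omega^2 * u / lap(u), and the division by a finite-difference Laplacian is precisely what makes such inversion noise-sensitive.

```python
import numpy as np

# Illustrative constants: tissue density rho (kg/m^3), a 60 Hz drive
# frequency, and a ground-truth stiffness mu (Pa) to recover.
rho, freq, mu_true = 1000.0, 60.0, 3000.0
omega = 2.0 * np.pi * freq
k = omega * np.sqrt(rho / mu_true)   # wave number, since mu = rho*omega^2/k^2

# Synthetic 1D shear displacement field u(x) = sin(k*x) over 10 cm.
x = np.linspace(0.0, 0.1, 2001)
dx = x[1] - x[0]
u = np.sin(k * x)

# Finite-difference Laplacian at interior points.
lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2

# Algebraic inversion mu = -rho*omega^2*u / lap(u), masking points where
# the Laplacian is near zero -- the division that amplifies noise.
interior = u[1:-1]
mask = np.abs(lap) > 1e-3 * np.abs(lap).max()
mu_est = np.median(-rho * omega**2 * interior[mask] / lap[mask])
print(f"estimated stiffness: {mu_est:.0f} Pa (true: {mu_true:.0f} Pa)")
```

On clean data the ratio recovers the true stiffness; any measurement noise passes through the second-difference operator and is amplified wherever the Laplacian is small, which motivates the PINN alternative described above.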
Shape-Aware 3D Small Vessel Segmentation with Local Contrast Guided Attention
Zhiwei Deng, Songnan Xu, Jianwei Zhang, Jiong Zhang, Danny J Wang, Lirong Yan, Yonggang Shi
Volume 14223, pp. 354-363, published 2023-10-01. DOI: 10.1007/978-3-031-43901-8_34. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948105/pdf/
Abstract: The automated segmentation and analysis of small vessels from in vivo imaging data is an important task for many clinical applications. While current filtering and learning methods have achieved good performance on the segmentation of large vessels, they are sub-optimal for small vessel detection because small vessels are geometrically irregular and weakly contrasted at the relatively limited resolution of existing imaging techniques. In addition, supervised learning approaches depend heavily on skilled experts for accurate pixel-wise annotations in these small vascular regions. In this work, we propose a novel self-supervised network to tackle these challenges and improve the detection of small vessels in 3D imaging data. First, our network maximizes a novel shape-aware flux-based measure to enhance the estimation of small vasculature with non-circular and irregular appearances. Then, we develop novel local contrast guided attention (LCA) and enhancement (LCE) modules to boost the vesselness responses of vascular regions of low contrast. In our experiments, we compare with four filtering-based methods and a state-of-the-art self-supervised deep learning method on multiple 3D datasets and demonstrate that our method achieves significant improvement on all datasets. Further analysis and ablation studies assess the contributions of the various modules to the improved performance in 3D small vessel segmentation. Our code is available at https://github.com/dengchihwei/LCNetVesselSeg.

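For context on the filtering-based baselines the abstract compares against, the classic Hessian-eigenvalue (Frangi-style) vesselness idea can be sketched in 2D. This is the generic circular-tube model such filters assume, not the shape-aware flux measure or the LCA/LCE modules proposed in the paper; the scale and threshold constants are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, sigma=2.0, beta=0.5, c=0.1):
    # Scale-normalized Hessian entries at scale sigma.
    hrr = gaussian_filter(image, sigma, order=(2, 0)) * sigma**2
    hcc = gaussian_filter(image, sigma, order=(0, 2)) * sigma**2
    hrc = gaussian_filter(image, sigma, order=(1, 1)) * sigma**2
    # Eigenvalues of the symmetric 2x2 Hessian, ordered so |l1| <= |l2|.
    half_trace = 0.5 * (hrr + hcc)
    root = np.sqrt((0.5 * (hrr - hcc))**2 + hrc**2)
    ev_a, ev_b = half_trace + root, half_trace - root
    first_smaller = np.abs(ev_a) <= np.abs(ev_b)
    l1 = np.where(first_smaller, ev_a, ev_b)
    l2 = np.where(first_smaller, ev_b, ev_a)
    # Tube-vs-blob ratio and overall structure strength.
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)
    s2 = l1**2 + l2**2
    v = np.exp(-rb**2 / (2.0 * beta**2)) * (1.0 - np.exp(-s2 / (2.0 * c**2)))
    return np.where(l2 < 0, v, 0.0)   # bright tubes have l2 < 0

# Bright horizontal line on a dark background.
img = np.zeros((64, 64))
img[32, :] = 1.0
v = vesselness_2d(gaussian_filter(img, 1.0))
print(v[32, 40] > v[10, 10])   # response peaks on the vessel centerline
```

The eigenvalue-ratio term is what penalizes non-tubular (blob-like) structures; it is exactly this circularity assumption that breaks down for the irregular small vessels targeted above.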
Relaxation-Diffusion Spectrum Imaging for Probing Tissue Microarchitecture
Ye Wu, Xiaoming Liu, Xinyuan Zhang, Khoi Minh Huynh, Sahar Ahmad, Pew-Thian Yap
Volume 14227, pp. 152-162, published 2023-10-01. DOI: 10.1007/978-3-031-43993-3_15. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11340880/pdf/
Abstract: Brain tissue microarchitecture is characterized by heterogeneous degrees of diffusivity and rates of transverse relaxation. Unlike standard diffusion MRI with a single echo time (TE), which provides information primarily on diffusivity, relaxation-diffusion MRI involves multiple TEs and multiple diffusion-weighting strengths for probing tissue-specific coupling between relaxation and diffusivity. Here, we introduce a relaxation-diffusion model that characterizes tissue apparent relaxation coefficients for a spectrum of diffusion length scales and at the same time factors out the effects of intra-voxel orientation heterogeneity. We examined the model on an in vivo dataset, acquired with a clinical scanner, covering different health conditions. Experimental results indicate that our model caters to heterogeneous tissue microstructure and can distinguish fiber bundles with similar diffusivities but different relaxation rates. Code with sample data is available at https://github.com/dryewu/RDSI.

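The coupling the abstract describes can be seen in the standard multi-compartment relaxation-diffusion signal model (the notation here is ours, not necessarily the paper's): for echo time TE and diffusion weighting b, the orientation-averaged (spherical-mean) signal is

```latex
\bar{S}(\mathrm{TE}, b) \;=\; \sum_{k} f_k \, e^{-\mathrm{TE}/T_{2,k}} \, \bar{s}_k(b; D_k),
```

where $f_k$, $T_{2,k}$, and $D_k$ are the volume fraction, transverse relaxation time, and diffusivity of compartment $k$, and $\bar{s}_k$ is that compartment's spherical-mean diffusion attenuation. Two compartments with $D_1 \approx D_2$ but $T_{2,1} \neq T_{2,2}$ are indistinguishable at a single TE but separate as TE varies, which is why multiple TEs are acquired; taking the spherical mean over gradient directions at fixed $(\mathrm{TE}, b)$ removes the dependence on fiber orientation, factoring out intra-voxel orientation heterogeneity.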
Distilling BlackBox to Interpretable Models for Efficient Transfer Learning
Shantanu Ghosh, Ke Yu, Kayhan Batmanghelich
Volume 14221, pp. 628-638, published 2023-10-01. DOI: 10.1007/978-3-031-43895-0_59. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141113/pdf/
Abstract: Building generalizable AI models is one of the primary challenges in the healthcare domain. While radiologists rely on generalizable descriptive rules of abnormality, neural network (NN) models degrade with even a slight shift in input distribution (e.g., scanner type). Fine-tuning a model to transfer knowledge from one domain to another requires a significant amount of labeled data in the target domain. In this paper, we develop an interpretable model that can be efficiently fine-tuned to an unseen target domain with minimal computational cost. We assume the interpretable component of the NN to be approximately domain-invariant. However, interpretable models typically underperform their blackbox (BB) variants. We therefore start with a BB in the source domain and distill it into a mixture of shallow interpretable models using human-understandable concepts. Because each interpretable model covers a subset of the data, the mixture achieves performance comparable to the BB. Further, we use the pseudo-labeling technique from semi-supervised learning (SSL) to learn the concept classifier in the target domain, followed by fine-tuning the interpretable models in the target domain. We evaluate our model using a real-life large-scale chest X-ray (CXR) classification dataset. The code is available at https://github.com/batmanlab/MICCAI-2023-Route-interpret-repeat-CXRs.

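Pseudo-labeling, the SSL ingredient mentioned above, can be illustrated with a toy nearest-centroid classifier: a source-trained model predicts on unlabeled target data, only confident predictions are kept as labels, and the model is refit on them. This is a generic sketch under simplified Gaussian-data assumptions, not the paper's concept classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroids(x, y):
    # One centroid per class (binary labels 0/1).
    return np.stack([x[y == k].mean(axis=0) for k in (0, 1)])

def predict_proba(x, cents):
    # Softmax over negative distances to the two class centroids.
    d = np.linalg.norm(x[:, None, :] - cents[None, :, :], axis=2)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

# Labeled source domain: two Gaussian classes.
xs = np.concatenate([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
ys = np.repeat([0, 1], 100)

# Unlabeled target domain: same classes under a distribution shift.
xt = np.concatenate([rng.normal(-1.5, 1, (100, 2)), rng.normal(2.5, 1, (100, 2))])
yt_true = np.repeat([0, 1], 100)

# Step 1: the source-trained model predicts on the target domain.
proba = predict_proba(xt, centroids(xs, ys))

# Step 2: keep only confident predictions as pseudo-labels.
keep = proba.max(axis=1) > 0.9
pseudo = proba.argmax(axis=1)

# Step 3: refit on the confidently pseudo-labeled target data.
cents_t = centroids(xt[keep], pseudo[keep])
acc = (predict_proba(xt, cents_t).argmax(axis=1) == yt_true).mean()
print(f"target accuracy after pseudo-label refit: {acc:.2f}")
```

The confidence threshold is the key knob: too low admits noisy labels, too high discards most of the target data, so no labeled target data is needed at all.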
Enhancing Automatic Placenta Analysis through Distributional Feature Recomposition in Vision-Language Contrastive Learning
Yimu Pan, Tongan Cai, Manas Mehta, Alison D Gernand, Jeffery A Goldstein, Leena Mithal, Delia Mwinyelle, Kelly Gallagher, James Z Wang
Volume 14225, pp. 116-126, published 2023-10-01. DOI: 10.1007/978-3-031-43987-2_12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11192145/pdf/
Abstract: The placenta is a valuable organ that can aid in understanding adverse events during pregnancy and predicting issues post-birth. Manual pathological examination and report generation, however, are laborious and resource-intensive. Limitations in diagnostic accuracy and model efficiency have impeded previous attempts to automate placenta analysis. This study presents a novel framework for the automatic analysis of placenta images that aims to improve both accuracy and efficiency. Building on previous vision-language contrastive learning (VLC) methods, we propose two enhancements, Pathology Report Feature Recomposition and Distributional Feature Recomposition, which increase representation robustness and mitigate feature suppression. In addition, we employ efficient neural networks as image encoders to achieve model compression and inference acceleration. Experiments validate that the proposed approach outperforms prior work in both performance and efficiency by significant margins. The benefits of our method, including enhanced efficacy and deployability, may have significant implications for reproductive healthcare, particularly in rural areas and low- and middle-income countries.

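The vision-language contrastive objective underlying VLC methods is, in its common CLIP-style form, a symmetric InfoNCE loss over matched image/report embedding pairs. A minimal numpy sketch of that generic objective follows (not the recomposition enhancements proposed above; the temperature value is the conventional default, used here only for illustration):

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    # L2-normalize both sides, then form cosine-similarity logits.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature

    def xent_diag(lg):
        # Cross-entropy with the matching pair (diagonal) as the target.
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Symmetric: image-to-text and text-to-image directions.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

rng = np.random.default_rng(1)
txt = rng.normal(size=(8, 16))                     # 8 report embeddings
img_aligned = txt + 0.01 * rng.normal(size=(8, 16))
aligned = clip_style_loss(img_aligned, txt)
mismatched = clip_style_loss(np.roll(txt, 1, axis=0), txt)
print(aligned < mismatched)   # matched pairs give the lower loss
```

Training pulls each image embedding toward its own report and pushes it away from every other report in the batch; the feature-suppression problem the abstract mentions arises because this objective only needs whatever features suffice to tell pairs apart.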
Modularity-Constrained Dynamic Representation Learning for Interpretable Brain Disorder Analysis with Functional MRI
Qianqian Wang, Mengqi Wu, Yuqi Fang, Wei Wang, Lishan Qiao, Mingxia Liu
Volume 14220, pp. 46-56, published 2023-10-01. DOI: 10.1007/978-3-031-43907-0_5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10883232/pdf/
Abstract: Resting-state functional MRI (rs-fMRI) is increasingly used to detect altered functional connectivity patterns caused by brain disorders, thereby facilitating objective quantification of brain pathology. Existing studies typically extract fMRI features using various machine/deep learning methods, but the generated imaging biomarkers are often challenging to interpret. Moreover, the brain operates as a modular system with many cognitive/topological modules, where each module contains subsets of densely inter-connected regions-of-interest (ROIs) that are sparsely connected to ROIs in other modules; current methods cannot effectively characterize this modularity. This paper proposes a modularity-constrained dynamic representation learning (MDRL) framework for interpretable brain disorder analysis with rs-fMRI. The MDRL consists of three parts: (1) dynamic graph construction, (2) a modularity-constrained spatiotemporal graph neural network (MSGNN) for dynamic feature learning, and (3) prediction and biomarker detection. In particular, the MSGNN is designed to learn spatiotemporal dynamic representations of fMRI, constrained by three functional modules (i.e., the central executive network, salience network, and default mode network). To enhance the discriminative ability of the learned features, we encourage the MSGNN to reconstruct the network topology of input graphs. Experimental results on two public datasets and one private dataset, with a total of 1,155 subjects, validate that our MDRL outperforms several state-of-the-art methods in fMRI-based brain disorder analysis. The detected fMRI biomarkers have good explainability and can potentially be used to improve clinical diagnosis.

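The modularity notion invoked above, dense within-module and sparse between-module connectivity, is commonly quantified by Newman's Q: the fraction of edges falling within communities minus its expectation under a degree-preserving random null model. A small sketch (illustrative of the graph-theoretic quantity only, not the MDRL constraint itself):

```python
import numpy as np

def modularity(adj, comm):
    # Newman's Q: observed within-community edge weight minus its
    # expectation under a degree-preserving random rewiring.
    deg = adj.sum(axis=1)
    two_m = deg.sum()
    same = comm[:, None] == comm[None, :]
    return ((adj - np.outer(deg, deg) / two_m) * same).sum() / two_m

# Two complete 4-node modules joined by one bridge edge: densely
# intra-connected, sparsely inter-connected, as described above.
a = np.zeros((8, 8))
for i in range(4):
    for j in range(i + 1, 4):
        a[i, j] = a[j, i] = 1.0
        a[i + 4, j + 4] = a[j + 4, i + 4] = 1.0
a[3, 4] = a[4, 3] = 1.0
good = modularity(a, np.repeat([0, 1], 4))   # partition matching the modules
bad = modularity(a, np.tile([0, 1], 4))      # partition ignoring the modules
print(good > bad)
```

A partition aligned with the true modules scores a clearly higher Q than one that cuts across them, which is the property a modularity constraint rewards.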
Microstructure Fingerprinting for Heterogeneously Oriented Tissue Microenvironments
Khoi Minh Huynh, Ye Wu, Sahar Ahmad, Pew-Thian Yap
Volume 14227, pp. 131-141, published 2023-10-01. DOI: 10.1007/978-3-031-43993-3_13. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315459/pdf/
Abstract: Most diffusion biophysical models capture basic properties of tissue microstructure, such as diffusivity and anisotropy. More realistic models that relate the diffusion-weighted signal to cell size and membrane permeability often require simplifying assumptions, such as a short gradient pulse and a Gaussian phase distribution, leading to tissue features that are not necessarily quantitative. Here, we propose a method to quantify tissue microstructure without sacrificing accuracy to unrealistic assumptions. Our method uses realistic signals simulated from the geometries of cellular microenvironments as fingerprints, which are then employed in a spherical mean estimation framework to disentangle the effects of orientation dispersion from microscopic tissue properties. We demonstrate the efficacy of microstructure fingerprinting in estimating intra-cellular, extra-cellular, and intra-soma volume fractions as well as axon radius, soma radius, and membrane permeability.

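The fingerprinting idea, matching a measured signal against a dictionary of simulated candidates rather than fitting a closed-form model, can be sketched with a deliberately simplified mono-exponential dictionary standing in for the realistic cell-geometry simulations the paper uses (all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# "Fingerprints": simulated signals over a grid of candidate diffusivities.
bvals = np.array([0.0, 0.5, 1.0, 2.0, 3.0])      # ms/um^2
d_grid = np.linspace(0.1, 3.0, 300)              # candidate D, um^2/ms
dictionary = np.exp(-np.outer(d_grid, bvals))    # one fingerprint per row

# Noisy measurement generated from a known ground-truth diffusivity.
d_true = 1.7
signal = np.exp(-d_true * bvals) + 0.01 * rng.normal(size=bvals.size)

# Match: nearest fingerprint in the least-squares sense.
best = np.argmin(((dictionary - signal) ** 2).sum(axis=1))
print(f"estimated D = {d_grid[best]:.2f} um^2/ms (true {d_true})")
```

Because the dictionary entries can come from arbitrarily realistic simulations, the estimation step itself never needs short-pulse or Gaussian-phase assumptions, which is the point the abstract makes.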
Hybrid Multimodality Fusion with Cross-Domain Knowledge Transfer to Forecast Progression Trajectories in Cognitive Decline
Minhui Yu, Yunbi Liu, Jinjian Wu, Andrea Bozoki, Shijun Qiu, Ling Yue, Mingxia Liu
Volume 14394, pp. 265-275, published 2023-10-01. DOI: 10.1007/978-3-031-47425-5_24. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10904401/pdf/
Abstract: Magnetic resonance imaging (MRI) and positron emission tomography (PET) are increasingly used to forecast progression trajectories of cognitive decline caused by preclinical and prodromal Alzheimer's disease (AD). Many existing studies have explored the potential of these two distinct modalities with diverse machine and deep learning approaches, but successfully fusing MRI and PET remains complex due to their unique characteristics and the problem of missing modalities. To this end, we develop a hybrid multimodality fusion (HMF) framework with cross-domain knowledge transfer for joint MRI and PET representation learning, feature fusion, and forecasting of cognitive decline progression. Our HMF consists of three modules: (1) a module to impute missing PET images, (2) a module to extract multimodality features from MRI and PET images, and (3) a module to fuse the extracted multimodality features. To address the issue of small sample sizes, we employ a cross-domain knowledge transfer strategy from the ADNI dataset, which includes 795 subjects, to independent small-scale AD-related cohorts, thereby leveraging the rich knowledge present in ADNI. The proposed HMF is extensively evaluated in three AD-related studies with 272 subjects across multiple disease stages, such as subjective cognitive decline and mild cognitive impairment. Experimental results demonstrate the superiority of our method over several state-of-the-art approaches in forecasting progression trajectories of AD-related cognitive decline.

Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network
Bryar Shareef, Min Xian, Aleksandar Vakanski, Haotian Wang
Volume 14223, pp. 344-353, published 2023-10-01. DOI: 10.1007/978-3-031-43901-8_33. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11006090/pdf/
Abstract: Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification. Although convolutional neural networks (CNNs) have demonstrated reliable performance in tumor classification, they have inherent limitations in modeling global and long-range dependencies due to the localized nature of convolution operations. Vision Transformers have an improved capability of capturing global contextual information but may distort local image patterns due to their tokenization operations. In this study, we propose a hybrid multitask deep neural network, Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation using a hybrid architecture composed of CNN and Swin Transformer components. The proposed approach was compared to nine BUS classification methods and evaluated using seven quantitative metrics on a dataset of 3,320 BUS images. The results indicate that Hybrid-MT-ESTAN achieved the highest accuracy, sensitivity, and F1 score, at 82.7%, 86.4%, and 86.0%, respectively.

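For reference, the three metrics reported above are defined from the binary confusion matrix; a generic computation, independent of the paper's model, on a tiny made-up label set:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    # Confusion-matrix counts for binary labels (1 = positive class).
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn)          # recall on the positive class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, f1

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1])
acc, sens, f1 = binary_metrics(y_true, y_pred)
print(acc, sens, f1)
```

Sensitivity ignores false positives while F1 balances them against false negatives, which is why the two reported numbers can differ even at the same accuracy.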
Motion Compensated Unsupervised Deep Learning for 5D MRI
Joseph Kettelkamp, Ludovica Romanin, Davide Piccini, Sarv Priya, Mathews Jacob
Volume 14229, pp. 419-427, published 2023-10-01. DOI: 10.1007/978-3-031-43999-5_40. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11087022/pdf/
Abstract: We propose an unsupervised deep learning algorithm for the motion-compensated reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated free-breathing 5D MRI simplifies scan planning, improves patient comfort, and offers several clinical benefits over breath-held 2D exams, including isotropic spatial resolution and the ability to reslice the data to arbitrary views. However, current reconstruction algorithms for 5D MRI require long computation times, and their outcome depends greatly on how uniformly the acquired data are binned into the different physiological phases. The proposed algorithm is a more data-efficient alternative to current motion-resolved reconstructions. This motion-compensated approach models the data in each cardiac/respiratory bin as Fourier samples of a deformed version of a 3D image template. The deformation maps are modeled by a convolutional neural network driven by the physiological phase information, and the deformation maps and the template are jointly estimated from the measured data. The cardiac and respiratory phases are estimated from 1D navigators using an auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired from two subjects.

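The core modeling step above, warping a shared image template by a per-bin deformation map, can be sketched in 2D with a dense displacement field (the paper works in 3D and enforces consistency with the measured Fourier samples; this toy omits both):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# 2D stand-in for the 3D image template: a Gaussian blob at (32, 32).
yy, xx = np.mgrid[0:64, 0:64]
template = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)

def deform(img, disp):
    # Warp by a dense displacement field disp of shape (2, H, W):
    # output(x) = img(x + disp(x)), via linear interpolation.
    coords = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float) + disp
    return map_coordinates(img, coords, order=1, mode='nearest')

# A rigid 3-pixel shift expressed as a dense displacement field, playing
# the role of one bin's CNN-predicted deformation map.
disp = np.zeros((2, 64, 64))
disp[0] += 3.0
warped = deform(template, disp)
print(warped[29, 32])   # the blob peak moves from row 32 to row 29
```

In the motion-compensated formulation, one such warp per cardiac/respiratory bin maps the single template to that bin's anatomy, so every acquired sample contributes to estimating one template rather than being split across independently reconstructed bins.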