{"title":"Probabilistic 3D Correspondence Prediction from Sparse Unsegmented Images.","authors":"Krithika Iyer, Shireen Y Elhabian","doi":"10.1007/978-3-031-73290-4_12","DOIUrl":"10.1007/978-3-031-73290-4_12","url":null,"abstract":"<p><p>The study of physiology demonstrates that the form (shape) of anatomical structures dictates their functions, and analyzing the form of anatomies plays a crucial role in clinical research. Statistical shape modeling (SSM) is a widely used tool for quantitative analysis of forms of anatomies, aiding in characterizing and identifying differences within a population of subjects. Despite its utility, the conventional SSM construction pipeline is often complex and time-consuming. Additionally, reliance on linearity assumptions further limits the model from capturing clinically relevant variations. Recent advancements in deep learning solutions enable the direct inference of SSM from unsegmented medical images, streamlining the process and improving accessibility. However, the new methods of SSM from images do not adequately account for situations where the imaging data quality is poor or where only sparse information is available. Moreover, quantifying aleatoric uncertainty, which represents inherent data variability, is crucial in deploying deep learning for clinical tasks to ensure reliable model predictions and robust decision-making, especially in challenging imaging conditions. Therefore, we propose SPI-CorrNet, a unified model that predicts 3D correspondences from sparse imaging data. It leverages a teacher network to regularize feature learning and quantifies data-dependent aleatoric uncertainty by adapting the network to predict intrinsic input variances. Experiments on the LGE MRI left atrium dataset and Abdomen CT-1K liver datasets demonstrate that our technique enhances the accuracy and robustness of sparse image-driven SSM.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"15242 ","pages":"117-127"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11568407/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Unsupervised Super-Resolution of Infant MRI via Dual-Modal Deep Image Prior.","authors":"Cheng Che Tsai, Xiaoyang Chen, Sahar Ahmad, Pew-Thian Yap","doi":"10.1007/978-3-031-45673-2_5","DOIUrl":"10.1007/978-3-031-45673-2_5","url":null,"abstract":"<p><p>Magnetic resonance imaging (MRI) is commonly used for studying infant brain development. However, due to the lengthy image acquisition time and limited subject compliance, high-quality infant MRI can be challenging. Without imposing additional burden on image acquisition, image super-resolution (SR) can be used to enhance image quality post-acquisition. Most SR techniques are supervised and trained on multiple aligned low-resolution (LR) and high-resolution (HR) image pairs, which in practice are not usually available. Unlike supervised approaches, Deep Image Prior (DIP) can be employed for unsupervised single-image SR, utilizing solely the input LR image for <i>de novo</i> optimization to produce an HR image. However, determining when to stop early in DIP training is non-trivial and presents a challenge to fully automating the SR process. To address this issue, we constrain the low-frequency k-space of the SR image to be similar to that of the LR image. We further improve performance by designing a dual-modal framework that leverages shared anatomical information between T1-weighted and T2-weighted images. We evaluated our model, dual-modal DIP (dmDIP), on infant MRI data acquired from birth to one year of age, demonstrating that enhanced image quality can be obtained with substantially reduced sensitivity to early stopping.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"14348 ","pages":"42-51"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11323077/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MoViT: Memorizing Vision Transformers for Medical Image Analysis.","authors":"Yiqing Shen, Pengfei Guo, Jingpu Wu, Qianqi Huang, Nhat Le, Jinyuan Zhou, Shanshan Jiang, Mathias Unberath","doi":"10.1007/978-3-031-45676-3_21","DOIUrl":"https://doi.org/10.1007/978-3-031-45676-3_21","url":null,"abstract":"<p><p>The synergy of long-range dependencies from transformers and local representations of image content from convolutional neural networks (CNNs) has led to advanced architectures and increased performance for various medical image analysis tasks due to their complementary benefits. However, compared with CNNs, transformers require considerably more training data, due to a larger number of parameters and an absence of inductive bias. The need for increasingly large datasets continues to be problematic, particularly in the context of medical imaging, where both annotation efforts and data protection result in limited data availability. In this work, inspired by the human decision-making process of correlating new \"evidence\" with previously memorized \"experience\", we propose a Memorizing Vision Transformer (MoViT) to alleviate the need for large-scale datasets to successfully train and deploy transformer-based architectures. MoViT leverages an external memory structure to cache history attention snapshots during the training stage. To prevent overfitting, we incorporate an innovative memory update scheme, attention temporal moving average, to update the stored external memories with the historical moving average. For inference speedup, we design a prototypical attention learning method to distill the external memory into smaller representative subsets. We evaluate our method on a public histology image dataset and an in-house MRI dataset, demonstrating that MoViT applied to varied medical image analysis tasks, can outperform vanilla transformer models across varied data regimes, especially in cases where only a small amount of annotated data is available. More importantly, MoViT can reach a competitive performance of ViT with only 3.0% of the training data. In conclusion, MoViT provides a simple plug-in for transformer architectures which may contribute to reducing the training data needed to achieve acceptable models for a broad range of medical image analysis tasks.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"14349 ","pages":"205-213"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008051/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140863811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Class-Balanced Deep Learning with Adaptive Vector Scaling Loss for Dementia Stage Detection.","authors":"Boning Tong, Zhuoping Zhou, Davoud Ataee Tarzanagh, Bojian Hou, Andrew J Saykin, Jason Moore, Marylyn Ritchie, Li Shen","doi":"10.1007/978-3-031-45676-3_15","DOIUrl":"10.1007/978-3-031-45676-3_15","url":null,"abstract":"<p><p>Alzheimer's disease (AD) leads to irreversible cognitive decline, with Mild Cognitive Impairment (MCI) as its prodromal stage. Early detection of AD and related dementia is crucial for timely treatment and slowing disease progression. However, classifying cognitive normal (CN), MCI, and AD subjects using machine learning models faces class imbalance, necessitating the use of balanced accuracy as a suitable metric. To enhance model performance and balanced accuracy, we introduce a novel method called VS-Opt-Net. This approach incorporates the recently developed vector-scaling (VS) loss into a machine learning pipeline named STREAMLINE. Moreover, it employs Bayesian optimization for hyperparameter learning of both the model and loss function. VS-Opt-Net not only amplifies the contribution of minority examples in proportion to the imbalance level but also addresses the challenge of generalization in training deep networks. In our empirical study, we use MRI-based brain regional measurements as features to conduct the CN vs MCI and AD vs MCI binary classifications. We compare the balanced accuracy of our model with other machine learning models and deep neural network loss functions that also employ class-balanced strategies. Our findings demonstrate that after hyperparameter optimization, the deep neural network using the VS loss function substantially improves balanced accuracy. It also surpasses other models in performance on the AD dataset. Moreover, our feature importance analysis highlights VS-Opt-Net's ability to elucidate biomarker differences across dementia stages.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"14349 ","pages":"144-154"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10924683/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140095306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Bayesian Quantization for Supervised Neuroimage Search.","authors":"Erkun Yang, Cheng Deng, Mingxia Liu","doi":"10.1007/978-3-031-45676-3_40","DOIUrl":"10.1007/978-3-031-45676-3_40","url":null,"abstract":"<p><p>Neuroimage retrieval plays a crucial role in providing physicians with access to previous similar cases, which is essential for case-based reasoning and evidence-based medicine. Due to low computation and storage costs, hashing-based search techniques have been widely adopted for establishing image retrieval systems. However, these methods often suffer from nonnegligible quantization loss, which can degrade the overall search performance. To address this issue, this paper presents a compact coding solution namely <i>Deep Bayesian Quantization</i> (DBQ), which focuses on deep compact quantization that can estimate continuous neuroimage representations and achieve superior performance over existing hashing solutions. Specifically, DBQ seamlessly combines the deep representation learning and the representation compact quantization within a novel Bayesian learning framework, where a proxy embedding-based likelihood function is developed to alleviate the sampling issue for traditional similarity supervision. Additionally, a Gaussian prior is employed to reduce the quantization losses. By utilizing pre-computed lookup tables, the proposed DBQ can enable efficient and effective similarity search. Extensive experiments conducted on 2, 008 structural MRI scans from three benchmark neuroimage datasets demonstrate that our method outperforms previous state-of-the-arts.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"14349 ","pages":"396-406"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10883338/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139934532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IA-GCN: Interpretable Attention based Graph Convolutional Network for Disease Prediction.","authors":"Anees Kazi, Soroush Farghadani, Iman Aganj, Nassir Navab","doi":"10.1007/978-3-031-45673-2_38","DOIUrl":"https://doi.org/10.1007/978-3-031-45673-2_38","url":null,"abstract":"<p><p>Interpretability in Graph Convolutional Networks (GCNs) has been explored to some extent in general in computer vision; yet, in the medical domain, it requires further examination. Most of the interpretability approaches for GCNs, especially in the medical domain, focus on interpreting the output of the model in a <i>post</i>-<i>hoc</i> fashion. In this paper, we propose an interpretable attention module (IAM) that explains the relevance of the input features to the classification task on a GNN Model. The model uses these interpretations to improve its performance. In a clinical scenario, such a model can assist the clinical experts in better decision-making for diagnosis and treatment planning. The main novelty lies in the IAM, which directly operates on input features. IAM learns the attention for each feature based on the unique interpretability-specific losses. We show the application of our model on two publicly available datasets, Tadpole and the UK Biobank (UKBB). For Tadpole we choose the task of disease classification, and for UKBB, age, and sex prediction. The proposed model achieves an increase in an average accuracy of 3.2% for Tadpole and 1.6% for UKBB sex and 2% for the UKBB age prediction task compared to the state-of-the-art. Further, we show exhaustive validation and clinical interpretation of our results.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"14348 ","pages":"382-392"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10583839/pdf/nihms-1932968.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49685875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiomics Boosts Deep Learning Model for IPMN Classification.","authors":"Lanhong Yao, Zheyuan Zhang, Ugur Demir, Elif Keles, Camila Vendrami, Emil Agarunov, Candice Bolan, Ivo Schoots, Marc Bruno, Rajesh Keswani, Frank Miller, Tamas Gonda, Cemal Yazici, Temel Tirkes, Michael Wallace, Concetto Spampinato, Ulas Bagci","doi":"10.1007/978-3-031-45676-3_14","DOIUrl":"10.1007/978-3-031-45676-3_14","url":null,"abstract":"<p><p>Intraductal Papillary Mucinous Neoplasm (IPMN) cysts are pre-malignant pancreas lesions, and they can progress into pancreatic cancer. Therefore, detecting and stratifying their risk level is of ultimate importance for effective treatment planning and disease control. However, this is a highly challenging task because of the diverse and irregular shape, texture, and size of the IPMN cysts as well as the pancreas. In this study, we propose a novel computer-aided diagnosis pipeline for IPMN risk classification from multi-contrast MRI scans. Our proposed analysis framework includes an efficient volumetric self-adapting segmentation strategy for pancreas delineation, followed by a newly designed deep learning-based classification scheme with a radiomics-based predictive approach. We test our proposed decision-fusion model in multi-center data sets of 246 multi-contrast MRI scans and obtain superior performance to the state of the art (SOTA) in this field. Our ablation studies demonstrate the significance of both radiomics and deep learning modules for achieving the new SOTA performance compared to international guidelines and published studies (81.9% vs 61.3% in accuracy). Our findings have important implications for clinical decision-making. In a series of rigorous experiments on multi-center data sets (246 MRI scans from five centers), we achieved unprecedented performance (81.9% accuracy). The code is available upon publication.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"14349 ","pages":"134-143"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10810260/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139563708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structural MRI Harmonization via Disentangled Latent Energy-Based Style Translation.","authors":"Mengqi Wu, Lintao Zhang, Pew-Thian Yap, Weili Lin, Hongtu Zhu, Mingxia Liu","doi":"10.1007/978-3-031-45673-2_1","DOIUrl":"10.1007/978-3-031-45673-2_1","url":null,"abstract":"<p><p>Multi-site brain magnetic resonance imaging (MRI) has been widely used in clinical and research domains, but usually is sensitive to non-biological variations caused by site effects (<i>e.g.</i>, field strengths and scanning protocols). Several retrospective data harmonization methods have shown promising results in removing these non-biological variations at feature or whole-image level. Most existing image-level harmonization methods are implemented through generative adversarial networks, which are generally computationally expensive and generalize poorly on independent data. To this end, this paper proposes a disentangled latent energy-based style translation (DLEST) framework for image-level structural MRI harmonization. Specifically, DLEST disentangles <i>site-invariant image generation</i> and <i>site-specific style translation</i> via a latent autoencoder and an energy-based model. The autoencoder learns to encode images into low-dimensional latent space, and generates faithful images from latent codes. The energy-based model is placed in between the encoding and generation steps, facilitating style translation from a source domain to a target domain implicitly. This allows <i>highly generalizable image generation and efficient style translation</i> through the latent space. We train our model on 4,092 T1-weighted MRIs in 3 tasks: histogram comparison, acquisition site classification, and brain tissue segmentation. Qualitative and quantitative results demonstrate the superiority of our approach, which generally outperforms several state-of-the-art methods.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"14348 ","pages":"1-11"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10883146/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139934531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Clinical Progression of Late-Life Depression to Alzheimer's Disease Over 5 Years with Structural MRI.","authors":"Lintao Zhang, Minhui Yu, Lihong Wang, David C Steffens, Rong Wu, Guy G Potter, Mingxia Liu","doi":"10.1007/978-3-031-21014-3_27","DOIUrl":"https://doi.org/10.1007/978-3-031-21014-3_27","url":null,"abstract":"<p><p>Previous studies have shown that late-life depression (LLD) may be a precursor of neurodegenerative diseases and may increase the risk of dementia. At present, the pathological relationship between LLD and dementia, in particularly Alzheimer's disease (AD) is unclear. Structural MRI (sMRI) can provide objective biomarkers for the computer-aided diagnosis of LLD and AD, providing a promising solution to understand the clinical progression of brain disorders. But few studies have focused on sMRI-based predictive analysis of clinical progression from LLD to AD. In this paper, we develop a deep learning method to predict the clinical progression of LLD to AD up to 5 years after baseline time using T1-weighted structural MRIs. We also analyze several important factors that limit the diagnostic performance of learning-based methods, including data imbalance, small-sample-size, and multi-site data heterogeneity, by leveraging a relatively large-scale database to aid model training. Experimental results on 308 subjects with sMRIs acquired from 2 imaging sites and the publicly available ADNI database demonstrate the potential of deep learning in predicting the clinical progression of LLD to AD. To the best of our knowledge, this is among the first attempts to explore the complex pathophysiological relationship between LLD and AD based on structural MRI using a deep learning method.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"13583 ","pages":"259-268"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9805302/pdf/nihms-1859375.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9838060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-scale Multi-structure Siamese Network (MMSNet) for Primary Open-Angle Glaucoma Prediction.","authors":"Mingquan Lin, Lei Liu, Mae Gorden, Michael Kass, Sarah Van Tassel, Fei Wang, Yifan Peng","doi":"10.1007/978-3-031-21014-3_45","DOIUrl":"10.1007/978-3-031-21014-3_45","url":null,"abstract":"<p><p>Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. POAG prediction before onset plays an important role in early treatment. Although deep learning methods have been proposed to predict POAG, these methods mainly focus on current status prediction. In addition, all these methods used a single image as input. On the other hand, glaucoma specialists determine a glaucomatous eye by comparing the follow-up optic nerve image with the baseline along with supplementary clinical data. To simulate this process, we proposed a Multi-scale Multi-structure Siamese Network (MMSNet) to predict future POAG event from fundus photographs. The MMSNet consists of two side-outputs for deep supervision and 2D blocks to utilize two-dimensional features to assist classification. The MMSNet network was trained and evaluated on a large dataset: 37,339 fundus photographs from 1,636 Ocular Hypertension Treatment Study (OHTS) participants. Extensive experiments show that MMSNet outperforms the state-of-the-art on two \"POAG prediction before onset\" tasks. Our AUC are 0.9312 and 0.9507, which are 0.2204 and 0.1490 higher than the state-of-the-art, respectively. In addition, an ablation study is performed to check the contribution of different components. These results highlight the potential of deep learning to assist and enhance the prediction of future POAG event. The proposed network will be publicly available on https://github.com/bionlplab/MMSNet.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"13583 ","pages":"436-445"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9844668/pdf/nihms-1864372.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10604661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}