{"title":"TMAN: A Triple Morphological Feature Attention Network for Fine-Grained Classification of Breast Ultrasound Images.","authors":"Dongyue Wang, Min Xue, Hui Wang","doi":"10.1007/s10278-025-01496-5","DOIUrl":"https://doi.org/10.1007/s10278-025-01496-5","url":null,"abstract":"<p><p>Accurately diagnosing various types of breast lesions is critical for assessing breast cancer risk and predicting patient outcomes, which necessitates a fine-grained classification approach. While convolutional neural networks (CNNs) are predominantly employed in fine-grained classification tasks for breast lesions, they often struggle to effectively capture and model the intricate relationships between local and global features, an aspect that is vital for achieving high classification accuracy. Additionally, Color Doppler Flow Imaging (CDFI) and Strain Elastography (SE) are two important ultrasound imaging techniques widely used in the diagnosis of breast lesions. However, their specific contributions to fine-grained classification have not been thoroughly investigated. In this paper, we introduce a Triple Morphological Feature Attention Network (TMAN) designed to enhance fine-grained classification of breast ultrasound images. The TMAN architecture comprises three key modules: Local Margin Attention (LMA), Structured Texture Attention (STA), and Fusion Attention (FA), each focused on extracting distinct morphological features. TMAN achieved an average accuracy of 74.40%, precision of 73.18%, and specificity of 96.02%, surpassing state-of-the-art methods. The findings reveal that incorporating CDFI significantly improved classification for malignant subtypes with a 10% accuracy boost, while SE had a negligible impact. These findings highlight the effectiveness of TMAN in extracting nuanced morphological features and advancing precision in breast ultrasound diagnosis. The source code is accessible at https://github.com/windywindyw/TMAN .</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143813390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Burn Diagnosis through SE-ResNet18 and Confidence Filtering.","authors":"Hanyue Mo, Ziwen Kuang, Haoxuan Wang, Xinyi Cai, Kun Cheng","doi":"10.1007/s10278-025-01495-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01495-6","url":null,"abstract":"<p><p>Accurate classification of burn severity is crucial for effective clinical treatment; however, existing methods often fail to balance precision and real-time performance. To address this challenge, we propose a deep learning-based approach utilizing an enhanced ResNet18 architecture with integrated attention mechanisms to improve classification accuracy. The system consists of data preprocessing, classification, optimization, and post-processing modules. The optimization strategy employs an adaptive learning rate combining cosine annealing and class-specific gradient adaptation, alongside targeted adjustments for class imbalance, while an improved Adam optimizer enhances convergence stability. Post-processing incorporates confidence filtering (threshold 0.3) and selective evaluation, with weighted aggregation-integrating dynamic accuracy calculation and moving average to refine predictions and enhance diagnostic reliability. Experimental results on a burn skin test dataset demonstrate that the proposed model achieves a classification accuracy of 99.19% ± 0.12 and a mean average precision (mAP) of 98.72% ± 0.10, highlighting its potential for real-time clinical burn assessment.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143813386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Do Radiologists Currently Monitor AI in Radiology and What Challenges Do They Face? An Interview Study and Qualitative Analysis.","authors":"Jamie Chow, Ryan Lee, Honghan Wu","doi":"10.1007/s10278-025-01493-8","DOIUrl":"https://doi.org/10.1007/s10278-025-01493-8","url":null,"abstract":"<p><p>Artificial intelligence (AI) in radiology is becoming increasingly prevalent; however, there is not a clear picture of how AI is being monitored today and how this should practically be done given the inherent risk of AI model performance degradation over time. This research investigates current practices and what difficulties radiologists face in monitoring AI. Semi-structured virtual interviews were conducted with 6 USA and 10 Europe-based radiologists. The interviews were automatically transcribed and underwent thematic analysis. The findings suggest that AI monitoring in radiology is still relatively nascent as most of the AI projects had not yet progressed into a fully live clinical deployment. The most common method of monitoring involved a manual process of retrospectively comparing the AI results against the radiology report. Automated and statistical methods of monitoring were much less common. The biggest challenges are a lack of resources to support AI monitoring and uncertainty about how to create a robust and scalable process of monitoring the breadth and variety of radiology AI applications available. There is currently a lack of practical guidelines on how to monitor AI which has led to a variety of approaches being proposed from both healthcare providers and vendors. An ensemble of mixed methods is recommended to monitor AI across multiple domains and metrics. This will be enabled by appropriate allocation of resources and the formation of robust and diverse multidisciplinary AI governance groups.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143813388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Radiology Clinical Histories Through Transformer-Based Automated Clinical Note Summarization.","authors":"Niloufar Eghbali, Chad Klochko, Zaid Mahdi, Laith Alhiari, Jonathan Lee, Beatrice Knisely, Joseph Craig, Mohammad M Ghassemi","doi":"10.1007/s10278-025-01477-8","DOIUrl":"https://doi.org/10.1007/s10278-025-01477-8","url":null,"abstract":"<p><p>Insufficient clinical information provided in radiology requests, coupled with the cumbersome nature of electronic health records (EHRs), poses significant challenges for radiologists in extracting pertinent clinical data and compiling detailed radiology reports. Considering the challenges and time involved in navigating electronic medical records (EMR), an automated method to accurately compress the text while maintaining key semantic information could significantly enhance the efficiency of radiologists' workflow. The purpose of this study is to develop and demonstrate an automated tool for clinical note summarization with the goal of extracting the most pertinent clinical information for the radiological assessments. We adopted a transfer learning methodology from the natural language processing domain to fine-tune a transformer model for abstracting clinical reports. We employed a dataset consisting of 1000 clinical notes from 970 patients who underwent knee MRI, all manually summarized by radiologists. The fine-tuning process involved a two-stage approach starting with self-supervised denoising and then focusing on the summarization task. The model successfully condensed clinical notes by 97% while aligning closely with radiologist-written summaries evidenced by a 0.9 cosine similarity and a ROUGE-1 score of 40.18. In addition, statistical analysis, indicated by a Fleiss kappa score of 0.32, demonstrated fair agreement among specialists on the model's effectiveness in producing more relevant clinical histories compared to those included in the exam requests. The proposed model effectively summarized clinical notes for knee MRI studies, thereby demonstrating potential for improving radiology reporting efficiency and accuracy.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancing Visual Perception Through VCANet-Crossover Osprey Algorithm: Integrating Visual Technologies.","authors":"Yuwen Ning, Jiaxin Li, Shuyi Sun","doi":"10.1007/s10278-025-01467-w","DOIUrl":"https://doi.org/10.1007/s10278-025-01467-w","url":null,"abstract":"<p><p>Diabetic retinopathy (DR) is a significant vision-threatening condition, necessitating accurate and efficient automated screening methods. Traditional deep learning (DL) models struggle to detect subtle lesions and also suffer from high computational complexity. Existing models primarily mimic the primary visual cortex (V1) of the human visual system, neglecting other higher-order processing regions. To overcome these limitations, this research introduces the vision core-adapted network-based crossover osprey algorithm (VCANet-COP) for subtle lesion recognition with better computational efficiency. The model integrates sparse autoencoders (SAEs) to extract vascular structures and lesion-specific features at a pixel level for improved abnormality detection. The front-end network in the VCANet emulates the V1, V2, V4, and inferotemporal (IT) regions to derive subtle lesions effectively and improve lesion detection accuracy. Additionally, the COP algorithm leveraging the osprey optimization algorithm (OOA) with a crossover strategy optimizes hyperparameters and network configurations to ensure better computational efficiency, faster convergence, and enhanced performance in lesion recognition. The experimental assessment of the VCANet-COP model on multiple DR datasets namely Diabetic_Retinopathy_Data (DR-Data), Structured Analysis of the Retina (STARE) dataset, Indian Diabetic Retinopathy Image Dataset (IDRiD), Digital Retinal Images for Vessel Extraction (DRIVE) dataset, and Retinal fundus multi-disease image dataset (RFMID) demonstrates superior performance over baseline works, namely EDLDR, FFU_Net, LSTM_MFORG, fundus-DeepNet, and CNN_SVD by achieving average outcomes of 98.14% accuracy, 97.9% sensitivity, 98.08% specificity, 98.4% precision, 98.1% F1-score, 96.2% kappa coefficient, 2.0% false positive rate (FPR), 2.1% false negative rate (FNR), and 1.5-s execution time. By addressing critical limitations, VCANet-COP provides a scalable and robust solution for real-world DR screening and clinical decision support.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the Impact of a Ki-67 Decision Support Algorithm on Pathology Residents' Scoring Accuracy.","authors":"Mine İlayda Şengör Aygün, Özben Yalçın, Burak Uzel, Gamze Kulduk, Cem Çomunoğlu","doi":"10.1007/s10278-025-01490-x","DOIUrl":"https://doi.org/10.1007/s10278-025-01490-x","url":null,"abstract":"<p><p>Ki-67 scoring is of essential importance in the evaluation of breast cancer. We evaluated a Ki-67 algorithm as a decision support tool to improve accuracy for pathology residents. We retrospectively evaluated Ki-67 scores on whole slide images (WSI) obtained from 156 consecutive breast cancer patients. Two senior pathologists determined the 2.1 mm<sup>2</sup> hotspot to be evaluated. Ki-67 scores from senior pathologists were compared with results generated by the algorithm, results from 10 pathology residents, and results from pathology residents with the assistance of the algorithm. In addition to numerical results from the algorithm, residents were also presented with a visual representation of nuclei that were counted and excluded. Statistical analysis was performed using Wilcoxon and intra-class correlation (ICC) tests. The mean Ki-67 scores from senior pathologists and the algorithm were 23 ± 18 and 24 ± 18, respectively (ICC, 0.98). Ki-67 scores from the residents were 19 ± 16 and 22 ± 16, without and with input from the algorithm, respectively. With input from the algorithm, residents' scores were significantly closer to those obtained by senior pathologists (p = 0.008). Residents modified their scores in 53.8% of the cases where 74% of the better scores were characterized by an increase in the original scores. The results obtained by the Ki-67 algorithm were highly correlated with those assessed by senior pathologists. We demonstrated that the algorithm may serve as a decision support tool for residents to align their results with those of senior pathologists.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of Future Risk of Moderate to Severe Kidney Function Loss Using a Deep Learning Model-Enabled Chest Radiography.","authors":"Kai-Chieh Chen, Shang-Yang Lee, Dung-Jang Tsai, Kai-Hsiung Ko, Yi-Chih Hsu, Wei-Chou Chang, Wen-Hui Fang, Chin Lin, Yu-Juei Hsu","doi":"10.1007/s10278-025-01489-4","DOIUrl":"https://doi.org/10.1007/s10278-025-01489-4","url":null,"abstract":"<p><p>Chronic kidney disease (CKD) remains a major public health concern, requiring better predictive models for early intervention. This study evaluates a deep learning model (DLM) that utilizes raw chest X-ray (CXR) data to predict moderate to severe kidney function decline. We analyzed data from 79,219 patients with an estimated Glomerular Filtration Rate (eGFR) between 65 and 120, segmented into development (n = 37,983), tuning (n = 15,346), internal validation (n = 14,113), and external validation (n = 11,777) sets. Our DLM, pretrained on CXR-report pairs, was fine-tuned with the development set. We retrospectively examined data spanning April 2011 to February 2022, with a 5-year maximum follow-up. Primary and secondary endpoints included CKD stage 3b progression, ESRD/dialysis, and mortality. The overall concordance index (C-index) values for the internal and external validation sets were 0.903 (95% CI, 0.885-0.922) and 0.851 (95% CI, 0.819-0.883), respectively. In these sets, the incidences of progression to CKD stage 3b at 5 years were 19.2% and 13.4% in the high-risk group, significantly higher than those in the median-risk (5.9% and 5.1%) and low-risk groups (0.9% and 0.9%), respectively. The sex, age, and eGFR-adjusted hazard ratios (HR) for the high-risk group compared to the low-risk group were 16.88 (95% CI, 10.84-26.28) and 7.77 (95% CI, 4.77-12.64), respectively. The high-risk group also exhibited higher probabilities of progressing to ESRD/dialysis or experiencing mortality compared to the low-risk group. Further analysis revealed that the high-risk group compared to the low/median-risk group had a higher prevalence of complications and abnormal blood/urine markers. Our findings demonstrate that a DLM utilizing CXR can effectively predict CKD stage 3b progression, offering a potential tool for early intervention in high-risk populations.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breast Ultrasound Image Segmentation Using Multi-branch Skip Connection Search.","authors":"Yue Wu, Lin Huang, Tiejun Yang","doi":"10.1007/s10278-025-01487-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01487-6","url":null,"abstract":"<p><p>To reduce the cost of designing neural networks and improve the accuracy of breast ultrasound image segmentation, an encoder-decoder neural network architecture search method is proposed, tailored for constructing segmentation models automatically. Initially, a multi-branch skip connection module is designed in which each branch utilizes distinct operations to extract features of varying scales and types from subsets of channels. Subsequently, a learnable operation weight search strategy is introduced that employs Gumbel-Softmax for reparameterizing discrete operation weights. This strategy explores optimal operations within the multi-branch skip connection module through both shared and non-shared methodologies. The candidate neural networks incorporate encoder-decoder block pairs that utilize the Swin Transformer from Swin-Unet and convolutional blocks from TransUNet, respectively. Experimental results demonstrate that the method identifies the optimal encoder-decoder model in approximately two hours. The automatically constructed model achieves superior segmentation accuracy, with Dice scores of approximately 85.94% and 84.44% on the BUSI and OASBUD datasets, respectively. It outperforms state-of-the-art (SOTA) methods such as AAU-Net, SK-U-Net, and TransUNet. High-precision segmentation results offer clear localization of lesion boundaries, thereby reducing the risk of missed diagnoses. The model's quantitative metrics, such as lesion area and morphology, can be seamlessly incorporated into diagnostic reports, facilitating the development of personalized treatment plans.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Cerebrovascular Segmentation and Visualization of Intracranial Time-of-Flight Magnetic Resonance Angiography Based on Deep Learning.","authors":"Yuqin Min, Jing Li, Shouqiang Jia, Yuehua Li, Shengdong Nie","doi":"10.1007/s10278-024-01215-6","DOIUrl":"10.1007/s10278-024-01215-6","url":null,"abstract":"<p><p>Time-of-flight magnetic resonance angiography (TOF-MRA) is a non-contrast technique used to visualize neurovascular. However, manual reconstruction of the volume render (VR) by radiologists is time-consuming and labor-intensive. Deep learning-based (DL-based) vessel segmentation technology may provide intelligent automation workflow. To evaluate the image quality of DL vessel segmentation for automatically acquiring intracranial arteries in TOF-MRA. A total of 394 TOF-MRA scans were selected, which included cerebral vascular health, aneurysms, or stenoses. Both our proposed method and two state-of-the-art DL methods are evaluated on external datasets for generalization ability. For qualitative assessment, two experienced clinical radiologists evaluated the image quality of cerebrovascular diagnostic and visualization (scoring 0-5 as unacceptable to excellent) obtained by manual VR reconstruction or automatic convolutional neural network (CNN) segmentation. The proposed CNN outperforms the other two DL-based methods in clinical scoring on external datasets, and its visualization was evaluated by readers as having the appearance of the radiologists' manual reconstructions. Scoring of proposed CNN and VR of intracranial arteries demonstrated good to excellent agreement with no significant differences (median, 5.0 and 5.0, P ≥ 12) at healthy-type scans. All proposed CNN image quality were considered to have adequate diagnostic quality (median scores > 2). Quantitative analysis demonstrated a superior dice similarity coefficient of cerebrovascular overlap (training sets and validation sets; 0.947 and 0.927). Automatic cerebrovascular segmentation using DL is feasible and the image quality in terms of vessel integrity, collateral circulation and lesion morphology is comparable to expert manual VR without significant differences.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"703-716"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950609/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141918551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PelviNet: A Collaborative Multi-agent Convolutional Network for Enhanced Pelvic Image Registration.","authors":"Rguibi Zakaria, Hajami Abdelmajid, Zitouni Dya, Allali Hakim","doi":"10.1007/s10278-024-01249-w","DOIUrl":"10.1007/s10278-024-01249-w","url":null,"abstract":"<p><p>PelviNet introduces a groundbreaking multi-agent convolutional network architecture tailored for enhancing pelvic image registration. This innovative framework leverages shared convolutional layers, enabling synchronized learning among agents and ensuring an exhaustive analysis of intricate 3D pelvic structures. The architecture combines max pooling, parametric ReLU activations, and agent-specific layers to optimize both individual and collective decision-making processes. A communication mechanism efficiently aggregates outputs from these shared layers, enabling agents to make well-informed decisions by harnessing combined intelligence. PelviNet's evaluation centers on both quantitative accuracy metrics and visual representations to elucidate agents' performance in pinpointing optimal landmarks. Empirical results demonstrate PelviNet's superiority over traditional methods, achieving an average image-wise error of 2.8 mm, a subject-wise error of 3.2 mm, and a mean Euclidean distance error of 3.0 mm. These quantitative results highlight the model's efficiency and precision in landmark identification, crucial for medical contexts such as radiation therapy, where exact landmark identification significantly influences treatment outcomes. By reliably identifying critical structures, PelviNet advances pelvic image analysis and offers potential enhancements for broader medical imaging applications, marking a significant step forward in computational healthcare.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"957-966"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950488/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142157124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}