{"title":"Multi-Class Classification of Breast Cancer Subtypes Using ResNet Architectures on Histopathological Images.","authors":"Akshat Desai, Rakeshkumar Mahto","doi":"10.3390/jimaging11080284","DOIUrl":"10.3390/jimaging11080284","url":null,"abstract":"<p><p>Breast cancer is a significant cause of cancer-related mortality among women around the globe, underscoring the need for early and accurate diagnosis. Typically, histopathological analysis of biopsy slides is utilized for tumor classification. However, it is labor-intensive, subjective, and often affected by inter-observer variability. Therefore, this study explores a deep learning-based, multi-class classification framework for distinguishing breast cancer subtypes using convolutional neural networks (CNNs). Unlike previous work using the popular BreaKHis dataset, where binary classification models were applied, in this work, we differentiate eight histopathological subtypes: four benign (adenosis, fibroadenoma, phyllodes tumor, and tubular adenoma) and four malignant (ductal carcinoma, lobular carcinoma, mucinous carcinoma, and papillary carcinoma). This work leverages transfer learning with ImageNet-pretrained ResNet architectures (ResNet-18, ResNet-34, and ResNet-50) and extensive data augmentation to enhance classification accuracy and robustness across magnifications. Among the ResNet models, ResNet-50 achieved the best performance, attaining a maximum accuracy of 92.42%, an AUC-ROC of 99.86%, and an average specificity of 98.61%. These findings validate the combined effectiveness of CNNs and transfer learning in capturing fine-grained histopathological features required for accurate breast cancer subtype classification.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387189/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Detection to Diagnosis: An Advanced Transfer Learning Pipeline Using YOLO11 with Morphological Post-Processing for Brain Tumor Analysis for MRI Images.","authors":"Ikram Chourib","doi":"10.3390/jimaging11080282","DOIUrl":"10.3390/jimaging11080282","url":null,"abstract":"<p><p>Accurate and timely detection of brain tumors from magnetic resonance imaging (MRI) scans is critical for improving patient outcomes and informing therapeutic decision-making. However, the complex heterogeneity of tumor morphology, scarcity of annotated medical data, and computational demands of deep learning models present substantial challenges for developing reliable automated diagnostic systems. In this study, we propose a robust and scalable deep learning framework for brain tumor detection and classification, built upon an enhanced YOLO-v11 architecture combined with a two-stage transfer learning strategy. The first stage involves training a base model on a large, diverse MRI dataset. Upon achieving a mean Average Precision (mAP) exceeding 90%, this model is designated as the Brain Tumor Detection Model (BTDM). In the second stage, the BTDM is fine-tuned on a structurally similar but smaller dataset to form Brain Tumor Detection and Segmentation (BTDS), effectively leveraging domain transfer to maintain performance despite limited data. The model is further optimized through domain-specific data augmentation-including geometric transformations-to improve generalization and robustness. Experimental evaluations on publicly available datasets show that the framework achieves high mAP@0.5 scores (up to 93.5% for the BTDM and 91% for BTDS) and consistently outperforms existing state-of-the-art methods across multiple tumor types, including glioma, meningioma, and pituitary tumors. In addition, a post-processing module enhances interpretability by generating segmentation masks and extracting clinically relevant metrics such as tumor size and severity level. These results underscore the potential of our approach as a high-performance, interpretable, and deployable clinical decision-support tool, contributing to the advancement of intelligent real-time neuro-oncological diagnostics.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387851/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Spectrogram Learning for Gunshot Classification: A Comparative Study of CNN Architectures and Time-Frequency Representations.","authors":"Pafan Doungpaisan, Peerapol Khunarsa","doi":"10.3390/jimaging11080281","DOIUrl":"10.3390/jimaging11080281","url":null,"abstract":"<p><p>Gunshot sound classification plays a crucial role in public safety, forensic investigations, and intelligent surveillance systems. This study evaluates the performance of deep learning models in classifying firearm sounds by analyzing twelve time-frequency spectrogram representations, including Mel, Bark, MFCC, CQT, Cochleagram, STFT, FFT, Reassigned, Chroma, Spectral Contrast, and Wavelet. The dataset consists of 2148 gunshot recordings from four firearm types, collected in a semi-controlled outdoor environment under multi-orientation conditions. To leverage advanced computer vision techniques, all spectrograms were converted into RGB images using perceptually informed colormaps. This enabled the application of image processing approaches and fine-tuning of pre-trained Convolutional Neural Networks (CNNs) originally developed for natural image classification. Six CNN architectures-ResNet18, ResNet50, ResNet101, GoogLeNet, Inception-v3, and InceptionResNetV2-were trained on these spectrogram images. Experimental results indicate that CQT, Cochleagram, and Mel spectrograms consistently achieved high classification accuracy, exceeding 94% when paired with deep CNNs such as ResNet101 and InceptionResNetV2. These findings demonstrate that transforming time-frequency features into RGB images not only facilitates the use of image-based processing but also allows deep models to capture rich spectral-temporal patterns, providing a robust framework for accurate firearm sound classification.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387842/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic-Attentive Pooling Networks: A Hybrid Lightweight Deep Model for Lung Cancer Classification.","authors":"Williams Ayivi, Xiaoling Zhang, Wisdom Xornam Ativi, Francis Sam, Franck A P Kouassi","doi":"10.3390/jimaging11080283","DOIUrl":"10.3390/jimaging11080283","url":null,"abstract":"<p><p>Lung cancer is one of the leading causes of cancer-related mortality worldwide. The diagnosis of this disease remains a challenge due to the subtle and ambiguous nature of early-stage symptoms and imaging findings. Deep learning approaches, specifically Convolutional Neural Networks (CNNs), have significantly advanced medical image analysis. However, conventional architectures such as ResNet50 that rely on first-order pooling often fall short. This study aims to overcome the limitations of CNNs in lung cancer classification by proposing a novel and dynamic model named LungSE-SOP. The model is based on Second-Order Pooling (SOP) and Squeeze-and-Excitation Networks (SENet) within a ResNet50 backbone to improve feature representation and class separation. A novel Dynamic Feature Enhancement (DFE) module is also introduced, which dynamically adjusts the flow of information through SOP and SENet blocks based on learned importance scores. The model was trained using a publicly available IQ-OTH/NCCD lung cancer dataset. The performance of the model was assessed using various metrics, including the accuracy, precision, recall, F1-score, ROC curves, and confidence intervals. For multiclass tumor classification, our model achieved 98.6% accuracy for benign, 98.7% for malignant, and 99.9% for normal cases. Corresponding F1-scores were 99.2%, 99.8%, and 99.9%, respectively, reflecting the model's high precision and recall across all tumor types and its strong potential for clinical deployment.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387460/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fundus Image-Based Eye Disease Detection Using EfficientNetB3 Architecture.","authors":"Rahaf Alsohemi, Samia Dardouri","doi":"10.3390/jimaging11080279","DOIUrl":"10.3390/jimaging11080279","url":null,"abstract":"<p><p>Accurate and early classification of retinal diseases such as diabetic retinopathy, cataract, and glaucoma is essential for preventing vision loss and improving clinical outcomes. Manual diagnosis from fundus images is often time-consuming and error-prone, motivating the development of automated solutions. This study proposes a deep learning-based classification model using a pretrained EfficientNetB3 architecture, fine-tuned on a publicly available Kaggle retinal image dataset. The model categorizes images into four classes: cataract, diabetic retinopathy, glaucoma, and healthy. Key enhancements include transfer learning, data augmentation, and optimization via the Adam optimizer with a cosine annealing scheduler. The proposed model achieved a classification accuracy of 95.12%, with a precision of 95.21%, recall of 94.88%, F1-score of 95.00%, Dice Score of 94.91%, Jaccard Index of 91.2%, and an MCC of 0.925. These results demonstrate the model's robustness and potential to support automated retinal disease diagnosis in clinical settings.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387119/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Genetics of Amyloid Deposition: A Systematic Review of Genome-Wide Association Studies Using Amyloid PET Imaging in Alzheimer's Disease.","authors":"Amir A Amanullah, Melika Mirbod, Aarti Pandey, Shashi B Singh, Om H Gandhi, Cyrus Ayubcha","doi":"10.3390/jimaging11080280","DOIUrl":"10.3390/jimaging11080280","url":null,"abstract":"<p><p>Positron emission tomography (PET) has become a powerful tool in Alzheimer's disease (AD) research by enabling in vivo visualization of pathological biomarkers. Recent efforts have aimed to integrate PET-derived imaging phenotypes with genome-wide association studies (GWASs) to better elucidate the genetic architecture underlying AD. This systematic review examines studies that leverage PET imaging in the context of GWASs (PET-GWASs) to identify genetic variants associated with disease risk, progression, and brain region-specific pathology. A comprehensive search of PubMed and Embase databases was performed on 18 February 2025, yielding 210 articles, of which 10 met pre-defined inclusion criteria and were included in the final synthesis. Studies were eligible if they included AD populations, employed PET imaging alongside GWASs, and reported original full-text findings in English. No formal protocol was registered, and the risk of bias was not independently assessed. The included studies consistently identified <i>APOE</i> as the strongest genetic determinant of amyloid burden, while revealing additional significant loci including ABCA7 (involved in lipid metabolism and amyloid clearance), <i>FERMT2</i> (cell adhesion), <i>CR1</i> (immune response), TOMM40 (mitochondrial function), and <i>FGL2</i> (protective against amyloid deposition in Korean populations). The included studies suggest that PET-GWAS approaches can uncover genetic loci involved in processes such as lipid metabolism, immune response, and synaptic regulation. Despite limitations including modest cohort sizes and methodological variability, this integrated approach offers valuable insight into the biological pathways driving AD pathology. Expanding PET-genomic datasets, improving study power, and applying advanced computational tools may further clarify genetic mechanisms and contribute to precision medicine efforts in AD.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387344/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Task-Transfer Function Measurement for CT Image Quality Assessment Based on AAPM TG 233.","authors":"Choirul Anam, Riska Amilia, Ariij Naufal, Eko Hidayanto, Heri Sutanto, Lukmanda E Lubis, Toshioh Fujibuchi, Geoff Dougherty","doi":"10.3390/jimaging11080277","DOIUrl":"10.3390/jimaging11080277","url":null,"abstract":"<p><p>This study aims to develop and validate software for the automatic measurement of the task-transfer function (TTF) based on the American Association of Physicists in Medicine (AAPM) Task Group (TG) 233. The software consists of two main stages: automatic placement of the region of interest (ROI) within circular objects of the phantoms and calculating the TTF. The software was developed on four CT phantom types: computational phantom, ACR 464 CT phantom, AAPM CT phantom, and Catphan<sup>®</sup> 604 phantom. Each phantom was tested with varying parameters, including spatial resolution level, slice thickness, and image reconstruction technique. The results of TTF were compared with manual measurements performed using ImQuest version 7.3.01 and iQmetix-CT version v1.2. The software successfully located ROIs at all circular objects within each phantom and measured accurate TTF with various contrast-to-noise ratios (CNRs) of all phantoms. The TTF results were comparable to those obtained with ImQuest and iQmetrix-CT. It was found that the TTF curves produced by the software are smoother than those produced by ImQuest. An algorithm for the automated measurement of TTF was successfully developed and validated. TTF measurement with our software is highly user-friendly, requiring only a single click from the user.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387721/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ODDM: Integration of SMOTE Tomek with Deep Learning on Imbalanced Color Fundus Images for Classification of Several Ocular Diseases.","authors":"Afraz Danish Ali Qureshi, Hassaan Malik, Ahmad Naeem, Syeda Nida Hassan, Daesik Jeong, Rizwan Ali Naqvi","doi":"10.3390/jimaging11080278","DOIUrl":"10.3390/jimaging11080278","url":null,"abstract":"<p><p>Ocular disease (OD) represents a complex medical condition affecting humans. OD diagnosis is a challenging process in the current medical system, and blindness may occur if the disease is not detected at its initial phase. Recent studies showed significant outcomes in the identification of OD using deep learning (DL) models. Thus, this work aims to develop a multi-classification DL-based model for the classification of seven ODs, including normal (NOR), age-related macular degeneration (AMD), diabetic retinopathy (DR), glaucoma (GLU), maculopathy (MAC), non-proliferative diabetic retinopathy (NPDR), and proliferative diabetic retinopathy (PDR), using color fundus images (CFIs). This work proposes a custom model named the ocular disease detection model (ODDM) based on a CNN. The proposed ODDM is trained and tested on a publicly available ocular disease dataset (ODD). Additionally, the SMOTE Tomek (SM-TOM) approach is also used to handle the imbalanced distribution of the OD images in the ODD. The performance of the ODDM is compared with seven baseline models, including DenseNet-201 (R<sub>1</sub>), EfficientNet-B0 (R<sub>2</sub>), Inception-V3 (R<sub>3</sub>), MobileNet (R<sub>4</sub>), Vgg-16 (R<sub>5</sub>), Vgg-19 (R<sub>6</sub>), and ResNet-50 (R<sub>7</sub>). The proposed ODDM obtained a 98.94% AUC, along with 97.19% accuracy, a recall of 88.74%, a precision of 95.23%, and an F1-score of 88.31% in classifying the seven different types of OD. Furthermore, ANOVA and Tukey HSD (Honestly Significant Difference) post hoc tests are also applied to represent the statistical significance of the proposed ODDM. Thus, this study concludes that the results of the proposed ODDM are superior to those of baseline models and state-of-the-art models.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387618/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Contribution of AIDA (Artificial Intelligence Dystocia Algorithm) to Cesarean Section Within Robson Classification Group.","authors":"Antonio Malvasi, Lorenzo E Malgieri, Michael Stark, Edoardo Di Naro, Dan Farine, Giorgio Maria Baldini, Miriam Dellino, Murat Yassa, Andrea Tinelli, Antonella Vimercati, Tommaso Difonzo","doi":"10.3390/jimaging11080276","DOIUrl":"10.3390/jimaging11080276","url":null,"abstract":"<p><p>Global cesarean section (CS) rates continue to rise, with the Robson classification widely used for analysis. However, Robson Group 2A patients (nulliparous women with induced labor) show disproportionately high CS rates that cannot be fully explained by demographic factors alone. This study explored how the Artificial Intelligence Dystocia Algorithm (AIDA) could enhance the Robson system by providing detailed information on geometric dystocia, thereby facilitating better understanding of factors contributing to CS and developing more targeted reduction strategies. The authors conducted a comprehensive literature review analyzing both classification systems across multiple databases and developed a theoretical framework for integration. AIDA categorized labor cases into five classes (0-4) by analyzing four key geometric parameters measured through intrapartum ultrasound: angle of progression (AoP), asynclitism degree (AD), head-symphysis distance (HSD), and midline angle (MLA). Significant asynclitism (AD ≥ 7.0 mm) was strongly associated with CS regardless of other parameters, potentially explaining many \"failure to progress\" cases in Robson Group 2A patients. The proposed integration created a combined classification providing both population-level and individual geometric risk assessment. The integration of AIDA with the Robson classification represented a potentially valuable advancement in CS risk assessment, combining population-level stratification with individual-level geometric assessment to enable more personalized obstetric care. Future validation studies across diverse settings are needed to establish clinical utility.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387988/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning-Based Nuclei Segmentation and Melanoma Detection in Skin Histopathological Image Using Test Image Augmentation and Ensemble Model.","authors":"Mohammadesmaeil Akbarpour, Hamed Fazlollahiaghamalek, Mahdi Barati, Mehrdad Hashemi Kamangar, Mrinal Mandal","doi":"10.3390/jimaging11080274","DOIUrl":"10.3390/jimaging11080274","url":null,"abstract":"<p><p>Histopathological images play a crucial role in diagnosing skin cancer. However, due to the very large size of digital histopathological images (typically in the order of billion pixels), manual image analysis is tedious and time-consuming. Therefore, there has been significant interest in developing Artificial Intelligence (AI)-enabled computer-aided diagnosis (CAD) techniques for skin cancer detection. Due to the diversity of uncertain cell boundaries, automated nuclei segmentation of histopathological images remains challenging. Automating the identification of abnormal cell nuclei and analyzing their distribution across multiple tissue sections can significantly expedite comprehensive diagnostic assessments. In this paper, a deep neural network (DNN)-based technique is proposed to segment nuclei and detect melanoma in histopathological images. To achieve a robust performance, a test image is first augmented by various geometric operations. The augmented images are then passed through the DNN and the individual outputs are combined to obtain the final nuclei-segmented image. A morphological technique is then applied on the nuclei-segmented image to detect the melanoma region in the image. Experimental results show that the proposed technique can achieve a Dice score of 91.61% and 87.9% for nuclei segmentation and melanoma detection, respectively.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387607/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}