{"title":"MSFFE-Net: Multi-scale Spatial-Frequency Feature Enhancement for accurate liver tumor segmentation","authors":"Jinlin Ma , Kai Zhang , Ziping Ma , Ke Lu","doi":"10.1016/j.bspc.2025.108963","DOIUrl":"10.1016/j.bspc.2025.108963","url":null,"abstract":"<div><div>Accurate liver tumor segmentation is a crucial aspect for early diagnosis and surgery, but existing segmentation methods struggle with tumor heterogeneity, unclear boundaries, and small lesions due to limited multi-scale feature fusion and spatial perception. To alleviate these issues, we propose MSFFE-Net, a novel segmentation network that imposes a Multi-scale Spatial-Frequency Feature Enhancement mechanism, with the objective of unifying spatial and frequency domains to enrich feature representational power. Moreover, a Spatial-Frequency Domain Fusion (SFDF) module is incorporated to unify Fourier features with a dual-branch encoder, where standard convolutions and Residual Dilated Convolutions (RDC) are jointly employed to enable multi-scale feature extraction and to enhance edge perception. In addition, a Multi-scale Semantic Enhancement (MSE) module is introduced at the bottleneck to model global context, and CBAM attention is integrated into the skip connections to further optimize feature aggregation. Extensive experiments on the LiTS_2017 and 3Dircadb datasets further validate the effectiveness of the proposed method, achieving Dice coefficients of 98.12% and 97.24% for liver segmentation, and 89.61% and 92.87% for tumor segmentation, respectively. Compared with mainstream approaches such as nnU-Net and TransUNet, our model delivers Dice gains of 0.07%, 2.57%, and 1.00%, 1.83% on complex tumor datasets. In addition, the architecture maintains a favorable trade-off between accuracy and efficiency, with only 17.46 MB of parameters and an inference speed of 68.74 FPS. Ablation studies validate the model’s effectiveness in complex boundary and small target segmentation, advancing intelligent liver cancer diagnosis with potential for other organs tumor segmentation.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108963"},"PeriodicalIF":4.9,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Review: Algorithmic advances in central serous chorioretinopathy OCT: From classification to segmentation","authors":"Yihan Zhu , Yanwu Xu , Weihua Yang","doi":"10.1016/j.bspc.2025.108876","DOIUrl":"10.1016/j.bspc.2025.108876","url":null,"abstract":"<div><div>Central serous chorioretinopathy (CSC) is a common fundus disease characterized by serous retinal detachment in the macular region, which significantly impacts patients’ visual function. In recent years, with the continuous development of deep learning and image processing algorithms, remarkable progress has been made in algorithmic research based on CSC OCT images, particularly with a surge of innovative work and outstanding achievements in the areas of classification and segmentation. Through a systematic review of 62 research papers on CSC OCT algorithm development, this article summarizes multi-class algorithms such as binary and three-way classification, as well as algorithm optimization methods. It also reviews segmentation techniques for structures including serous retinal detachment (SRD), pigment epithelial detachment (PED), retinal vasculature, and the choroidal layer. Progress in CSC prediction, assessment, and assisted analysis algorithms is also summarized. Furthermore, the transition from classification to segmentation in CSC OCT algorithms is analyzed, along with the challenges and limitations in this research field. This review aims to provide a comprehensive understanding of the current state and future directions of CSC OCT image algorithm research for investigators in this domain.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108876"},"PeriodicalIF":4.9,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145323381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Utility of non-dimensional feature analysis of the PPG signal for automated screening of myocardial infarction (MI)","authors":"Abhishek Chakraborty , Deboleena Sadhukhan , Madhuchhanda Mitra","doi":"10.1016/j.bspc.2025.108786","DOIUrl":"10.1016/j.bspc.2025.108786","url":null,"abstract":"<div><div>These days, the manifold wearable attributes of the photoplethysmogram (PPG) signal acquired via optical means have been proven to be successful for the primary and rapid detection of myocardial infarction (MI) conditions. However, the available, limited set of state-of-the-art PPG-based methods is mostly found to be flawed, either owing to their procedural intricacy, validation over insufficient datasets, or quantification of the outcome in a partial manner. In this research, MI-induced variation is indicated via a unique set of non-dimensional features extracted from the normalized PPG first derivative (FDPPG) segment without utilizing fiducial point detection. This simple set of extracted features that has been popularly used for machine fault diagnosis applications is, in fact, adopted in this research for the first time to categorize between normal and MI subjects via a simple logistic regression classifier. The robust and superior performance of the proposed method can be seen from its mean detection accuracy, sensitivity, and specificity of 97.58 %, 96.77 %, and 98.39 % tested on 62 normal and 62 admitted MI subjects. In view of the available up-to-date research, the methodological simplicity and superior classification accuracy of the proposed method present immense promises for suitable cardiac monitoring applications.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108786"},"PeriodicalIF":4.9,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145323081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multilevel thresholding segmentation of medical images using the Crested Porcupine Optimizer with Enhanced Solution Quality and Gaussian distribution: Applications to liver, COVID-19, and brain diseases","authors":"Amina Salhi , Manel Ayadi , F.M. Aldosari , Fahad Algarni , Atef Ismail , Marwa M. Emam","doi":"10.1016/j.bspc.2025.108847","DOIUrl":"10.1016/j.bspc.2025.108847","url":null,"abstract":"<div><div>Accurate liver, COVID-19, and brain disease diagnosis is crucial for effective medical treatment and improved patient outcomes. In Computer-Aided Diagnosis (CAD) systems, segmentation is the foundational step, which plays a pivotal role in accurately delineating regions of interest for subsequent analysis. Among various techniques, multilevel thresholding segmentation is a specialized approach for processing medical images. However, its computational complexity and challenges in achieving satisfactory segmentation results limit its widespread application. To address these issues, this paper proposes an Enhanced Crested Porcupine Optimizer (ECPO) tailored for multilevel thresholding in medical image segmentation. The ECPO integrates two novel strategies: Enhanced Solution Quality (ESQ) and Gaussian Distribution, improving the exploration and exploitation capabilities of the original Crested Porcupine Optimizer (CPO). The optimization performance of ECPO is rigorously evaluated on 12 classical benchmark functions using CEC’2022 test functions, demonstrating superior results compared to CPO and other state-of-the-art algorithms. Subsequently, the ECPO is applied to segmenting medical images from three datasets focusing on liver cancer, COVID-19, and brain diseases. Utilizing Otsu and Kapur methods. Experimental results indicate that ECPO achieves the best segmentation outcomes in terms of fitness values, Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index (FSIM). The experimental results reveal that ECPO achieves the most accurate and effective segmentation outcomes across all datasets, outperforming other competitive algorithms. These findings underscore the potential of ECPO as a robust and efficient solution to the multilevel thresholding segmentation challenges in medical imaging.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108847"},"PeriodicalIF":4.9,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145323128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient NetV2 for Lung Disease Diagnosis and Treatment by Harnessing the Synergy of Verifiable Convolutional Neural Network optimized with Nutcracker Optimizer Algorithm","authors":"C. Shyamala Kumari, K. Seethalakshmi","doi":"10.1016/j.bspc.2025.108725","DOIUrl":"10.1016/j.bspc.2025.108725","url":null,"abstract":"<div><div>The growing demand for advanced computer-aided diagnosis systems in medical imaging is increasing for accurately detecting diseases, like COVID-19, pneumonia, tuberculosis and lung nodules. In spite of advancements, a research gap remains in developing models that offer high precision and transparent decision-making. To address this, this paper proposes an EfficientNetV2 for Lung Disease Diagnosis and Treatment by Harnessing the Synergy of Verifiable Convolutional Neural Network optimized with Nutcracker Optimizer Algorithm (EN-LDD-VCNN). Here, the images taken from Chest CT-Scan dataset are used. Then High Accuracy Distributed Kalman Filter (HADKF) is used for image resizing and pixel normalization. Afterwards, the pre-processing images are given into the Verifiable Convolutional Neural Network with EfficientNetV2 (VCNN-ENetV2) for classifying lung diseases, like adenocarcinoma, large cell carcinoma (LCC), squamous cell carcinoma (SCC), normal. Finally, the Nutcracker Optimizer Algorithm (NOA) is employed to optimize the weight parameters of VCNN. The performance metrics like accuracy, precision, recall, error rate, Matthew’s Correlation Coefficient (MCC), ROC is analyzed. The proposed technique achieves 20.73%,13.79% and 16.47% higher accuracy; 14.44%, 34.28% and 24.14% higher MCC; 12.16%, 18.39% and 26.27% higher precision compared with the existing techniques: Deep learning-dependent method to diagnose lung cancer by using CT-scan images (DL-DLC-CTI), Attention Enhanced Inception NeXt dependent Hybrid Deep Learning Method for Lung Cancer Diagnosis (AEI-HDLM-LCD) and Detection with categorization of lung disorders utilizing machine learning and deep learning strategies for pneumonia and Covid-19 (DC-LD-PC-DL) respectively.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108725"},"PeriodicalIF":4.9,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A CNN-based classifier for detecting rhythm disorders, premature contractions, and conduction abnormalities from ECG","authors":"Sudhanshu Gaurhar , Anil Kumar Tiwari , Surender Deora","doi":"10.1016/j.bspc.2025.108834","DOIUrl":"10.1016/j.bspc.2025.108834","url":null,"abstract":"<div><div>Arrhythmia is a condition characterized by an irregular heart rhythm. An electrocardiogram (ECG) is a widely used technique employed for identifying arrhythmias, as it reveals the morphological changes in the ECG waveform associated with these irregularities. The aim of this work is to classify arrhythmias using a deep learning-based convolutional neural network (CNN) has been introduced that eliminates the need for manual feature extraction. The CNN architecture is carefully designed with a large kernel size to enhance its effectiveness in capturing relevant features. In addition, to this arrhythmia is further subdivided based on their physiological origin of rhythm disorder, premature contraction, and conduction disorder for better feature learning. In order to improve the model’s generalization capability, a dataset classifier is proposed for three publicly available datasets: Chapman, CPSC 2018, and TNMG, each with different class distributions. This classifier is designed to account for variations in test sample distribution and dataset distributions, ensuring the model performs reliably across the datasets used in this study. The architecture is trained and tested on three publicly available datasets Chapman, CPSC 2018, and TNMG each with different class classifications. The CNN model extracts hierarchical features from the Lead II ECG signal. By utilizing publicly available datasets, the performance of the CNN model is evaluated and compared to existing state-of-the-art classification models. Experimental results demonstrate that the CNN model achieved an average F1-score of 98.00%, 96.25%, and 96.50% on the Chapman, CPSC 2018, and TNMG datasets, respectively. Additionally, the model achieved accuracies of 98.13%, 96.30%, and 96.88% on the Chapman, CPSC 2018, and TNMG datasets, respectively, outperforming other current models.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108834"},"PeriodicalIF":4.9,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing deep learning with attention techniques for improved detection of human monkeypox lesions","authors":"P. Prabu , P. Ganeshkumar , Swapnil M Parikh , Manoranjan Parhi , R. Murugan , Ala Saleh Alluhaidan","doi":"10.1016/j.bspc.2025.108902","DOIUrl":"10.1016/j.bspc.2025.108902","url":null,"abstract":"<div><div>Early and accurate detection of human monkeypox is vital for timely intervention and outbreak control. Traditional diagnostic methods are slow, error-prone, and often struggle to distinguish monkeypox lesions from visually similar skin conditions. To address these challenges, it propose an Optimized Colony Weighted Hybrid Pooling Attentive ConvNet (OCWPC), a novel deep learning framework that integrates Ant Colony Optimization (ACO) for robust feature selection and a Weighted Hybrid Pooling Attention (WHPA) mechanism to enhance lesion-specific feature extraction. The approach leverages multiple preprocessing steps, brightness and contrast enhancement, median filtering, unsharp masking, and Otsu threshold segmentation, followed by Scale-Invariant Feature Transform (SIFT) and Gaussian augmentation to improve feature robustness and generalization. The model was trained and validated on publicly available Human Monkeypox datasets, achieving superior results with 99.49% accuracy, 99.49% precision, 99.49% recall, and 98.98% mAP. Comparative evaluation against state-of-the-art models confirms the effectiveness of OCWPC in minimizing misclassification and improving reliability. These findings highlight the model’s potential for real-world clinical deployment and automated large-scale screening to strengthen monkeypox surveillance and management.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108902"},"PeriodicalIF":4.9,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PPG-based continuous arterial blood pressure estimation via multi-scale cross attention fusion","authors":"Zhonghe Tian , Aiping Liu , Junxin Chen , Dan Wang , Xun Chen","doi":"10.1016/j.bspc.2025.108833","DOIUrl":"10.1016/j.bspc.2025.108833","url":null,"abstract":"<div><div>Continuous monitoring of arterial blood pressure (ABP) plays a vital role in the early detection and management of cardiovascular diseases. Most existing ABP estimation methods based on photoplethysmography (PPG) signals typically use convolutional neural network (CNN) to extract local time-domain features, while neglecting the vascular elasticity and hemodynamic characteristics in the frequency-domain. To tackle the issue, we propose a multi-scale cross attention fusion network (MCAFNet). It utilizes the time-domain and frequency-domain information of PPG. Specifically, the network leverages ConvNeXt and Transformer to extract local time-domain and global frequency-domain features, respectively. Transform with sparse attention reduces computational complexity and effectively focuses on the most relevant information. Meanwhile, the feature fusion with cross attention effectively integrates complementary information from both time-domain and frequency-domain, improving the representation ability of features. To verify the effectiveness of our approach, we perform evaluations using the Medical Information Mart for Intensive Care (MIMIC) database. For diastolic blood pressure, mean arterial pressure, and systolic blood pressure, the mean absolute error ± standard deviation are of 1.29 ± 2.11, 1.02 ± 1.47, and 2.48 ± 3.89 mmHg, respectively. This performance meets the standards of Association for the Advancement of Medical Devices (AAMI) and British Hypertension Society (BHS), outperforming current state-of-the-art approaches.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108833"},"PeriodicalIF":4.9,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BrainDx: a dual-transformer framework using PVT and SegFormer for tumor diagnosis","authors":"Arshleen Kaur , Vinay Kukreja , Modafar Ati , Ankit Bansal , Shanmugasundaram Hariharan","doi":"10.1016/j.bspc.2025.108917","DOIUrl":"10.1016/j.bspc.2025.108917","url":null,"abstract":"<div><h3>Context</h3><div>Brain tumor diagnosis is challenging due to their complex morphology, indistinct boundaries, and subtle variations in Magnetic Resonance Imaging (MRI) scans. Manual diagnosis is time-consuming and error-prone, making the need for automated systems crucial. Recent advancements in deep learning, particularly in transformer models, have led to improved accuracy and speed in medical image analysis.</div></div><div><h3>Objective</h3><div>This research aims to develop an Artificial Intelligencee (AI) based framework that integrates the Pyramid Vision Transformer (PVT) for tumor classification and the SegFormer for tumor segmentation, thereby enhancing diagnostic accuracy, speed, and reducing human error in brain tumor detection.</div></div><div><h3>Methodology</h3><div>The proposed framework, BrainDX, utilizes PVT to classify MRI images into tumor types (Gliomas, Meningiomas, Pituitary Tumors, and Healthy Brain), and SegFormer to segment tumor regions in real-time. The dataset consists of annotated MRI images that undergo preprocessing (normalization, resizing, and augmentation). The models are trained and evaluated based on performance metrics, including accuracy, Dice score, Intersection over Union (IoU), and segmentation time.</div></div><div><h3>Results</h3><div>The framework was evaluated across three benchmark MRI datasets, achieving a classification accuracy of 94.0% and a Dice score of 0.87 for tumor segmentation. SegFormer demonstrated real-time segmentation, processing MRI images in under 50 ms. Both models maintained high efficiency while delivering robust performance, even in cases of irregular tumor boundaries.</div></div><div><h3>Future Scope</h3><div>Future work will focus on further optimizing the model for real-time clinical use, improving generalization across diverse tumor types and MRI modalities. This AI-powered system has the potential to enhance diagnostic processes and improve patient outcomes significantly.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108917"},"PeriodicalIF":4.9,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145323380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep-learning-based 3D multi-view multi-parametric MRI fusion model for preoperative T-staging of rectal cancer","authors":"Siyu Liu , Peng Zheng , Haoran Wang , Qingyang Feng , Jiayue Zhao , Manning Wang , Chenxi Zhang , Jianmin Xu","doi":"10.1016/j.bspc.2025.108787","DOIUrl":"10.1016/j.bspc.2025.108787","url":null,"abstract":"<div><div>Deep learning (DL) approaches leveraging multi-parametric magnetic resonance imaging (mpMRI) hold significant promise for the preoperative assessment of rectal cancer T-stage. In this study, we investigate whether a mpMRI fusion-based DL model can effectively evaluate the T-stage of rectal cancer. To enable robust development and comprehensive evaluation of an automated T-staging system, we assembled the largest mpMRI cohort to date, comprising 756 patients from three institutions with nine distinct imaging sequences. We introduce a multi-view multi-parametric (MVMP) MRI fusion model for this purpose. The strategy for effective sequence fusion involves grouping different MRI sequences based on scanning directions and integrating features from each group using an attention module. During evaluations, the MVMP model achieves performance comparable to that of two radiologists in both the internal test cohort (AUC: 0.84 vs. 0.79 vs. 0.79) and the external test cohort (AUC: 0.83 vs. 0.81 vs. 0.75). Moreover, it outperforms other DL competitors in both the internal (AUC: 0.840 vs. 0.766 vs. 0.787) and external test cohorts (AUC: 0.826 vs. 0.792 vs. 0.821). The validity of our design is further substantiated through ablation studies on backbone networks, view-specific branches, and individual sequences. In summary, our DL model based on mpMRI and multi-view fusion accurately evaluates the preoperative T-stage of rectal cancer and shows great promise as a valuable tool for clinical assessment.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"113 ","pages":"Article 108787"},"PeriodicalIF":4.9,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}