Measuring stress using wearable devices
Isaac L. Flett, Yunpeng Su, Wade Bennett, Harris Oon, Louis van Zyl, Josephine A. Dixon, Hamish A. Ferguson, Tony Zhou, Cong Zhou, Lui Holder-Pearson, J. Geoffrey Chase
Biomedical Signal Processing and Control, Volume 113, Article 108726. DOI: 10.1016/j.bspc.2025.108726. Published 2025-10-10.
Abstract: Over 200 million individuals experience anxiety or chronic pain. Assessing the intensity of anxiety and pain currently relies on subjective, patient-assessed ratings, which are limited by differing personal perceptions and variability. An unbiased, objective stress metric could help overcome these challenges and enable better management. Many wearable devices provide a stress measurement derived from heart-rate variability, which has been shown to decrease with stress. This study used a Garmin smartwatch stress metric to measure the stress of 12 female and 15 male participants during a 20-minute Colour Word Test (CWT), inducing a stress/anxiety response, and a 90-second Cold Pressor Test (CPT), inducing a pain response. To quantify the strength of a response, two metrics were defined: the Baseline factor, the participant's stress increase relative to basal stress, and the Headroom factor, which measures how close the stress response got to 100%. Both are on a 0.00-1.00 scale. The mean Baseline and Headroom factors were 0.46 and 0.38 for the CWT, and 0.06 and 0.12 for the CPT. CPT results were less consistent than CWT results, possibly as a result of the CPT's shorter duration or subject-specific pain responses. There were no clear differences in Baseline or Headroom factor results by sex. These results suggest smartwatches could offer an objective metric to measure stress or pain in real time, with applications in the assessment and management of anxiety and chronic pain, and as a tool in clinical trials to evaluate the effectiveness of stress-reducing interventions.
Early prediction of hepatocellular carcinoma using a risk-embedded longitudinal attention model
Chupeng Ling, Yiwen Zhang, Chengguang Hu, Naying Liao, Jinlong Zhang, Yuanping Zhou, Wei Yang
Biomedical Signal Processing and Control, Volume 113, Article 108897. DOI: 10.1016/j.bspc.2025.108897. Published 2025-10-10.
Abstract: Hepatocellular carcinoma (HCC) frequently arises in patients with liver cirrhosis, and biannual ultrasound surveillance is a cost-effective strategy for its early detection. Longitudinal ultrasound images from routine follow-ups offer critical information for clinical HCC prediction, yet existing models struggle to capture their temporal dynamics. We present the risk-embedded and longitudinal attention network (ReLANet), a deep learning framework that fuses diagnostic indicators of cirrhosis progression with cumulative risk data through a spatiotemporal architecture. By incorporating an age-dependent cumulative risk embedding and a longitudinal attention mechanism, ReLANet accommodates variable-length image sequences and dynamically evaluates their predictive value. In experiments on 6,170 samples from 619 cirrhosis patients, ReLANet achieved an area under the receiver operating characteristic curve of 80.2% (95% CI: 75.7%-84.4%), with 75.5% accuracy, 71.0% sensitivity, and 75.8% specificity, outperforming contemporary sequence models. These results demonstrate that ReLANet effectively integrates spatiotemporal and cumulative risk information from longitudinal ultrasound data, offering a state-of-the-art tool to enhance early HCC detection in at-risk populations.
Hypergraph aggregation contrastive learning network for lung cancer prognostic prediction based on tumor microenvironment
Xu Lu, Xiaojing Huang, Chenshuo Tang, Yuan Yuan, Haoxin Peng, Miao He, Wenhua Liang, Shaopeng Liu
Biomedical Signal Processing and Control, Volume 112, Article 108830. DOI: 10.1016/j.bspc.2025.108830. Published 2025-10-10.
Abstract: Lung cancer remains a leading cause of cancer-related deaths globally, with non-small cell lung cancer (NSCLC) accounting for approximately 85% of cases. The tumor microenvironment (TME) plays a crucial role in lung cancer progression and treatment response. Multiplex immunofluorescence (MIF) technology provides a unique perspective for analyzing spatial relationships within the complex TME. However, existing methods for MIF pathological images often process each image in isolation, overlooking both intra-patient multi-image complementarity and inter-patient pathological similarities. To address these limitations, we introduce the Hypergraph Aggregation Contrastive Learning Network (HACLN), which constructs a hypergraph to jointly model intra-patient multi-image features and inter-patient pathological relationships. HACLN aggregates features from multiple MIF images per patient, decomposes them into specialized subgraphs, and integrates them to enhance feature discrimination. We validate HACLN using an immunofluorescence image dataset from the First Affiliated Hospital of Guangzhou Medical University, demonstrating its effectiveness in capturing microenvironmental features and modeling patient-to-patient similarities. HACLN achieves a C-index of 0.7023, outperforming existing methods and providing a new direction for future research in lung cancer prognostic prediction based on the tumor microenvironment. Code is available at: https://github.com/sujuKyukyu/HACLN_code
AgileFormer: Spatially agile and scalable transformer for medical image segmentation
Peijie Qiu, Jin Yang, Sayantan Kumar, Soumyendu Sekhar Ghosh, Aristeidis Sotiras
Biomedical Signal Processing and Control, Volume 112, Article 108842. DOI: 10.1016/j.bspc.2025.108842. Published 2025-10-10.
Abstract: In the past decades, deep neural networks, particularly convolutional neural networks, have achieved state-of-the-art performance in various medical image segmentation tasks. Recently, the introduction of vision transformers (ViTs) has significantly altered the landscape of deep segmentation models, due to their ability to capture long-range dependencies. However, we argue that the current design of ViT-based UNet (ViT-UNet) segmentation models is limited in handling the heterogeneous appearance (e.g., varying shapes and sizes) of target objects commonly encountered in medical image segmentation tasks. To tackle this limitation, we present a structured approach to introduce spatially dynamic components into a ViT-UNet, enabling the model to capture features of target objects with diverse appearances effectively. This is achieved by three main components: (i) deformable patch embedding; (ii) spatially dynamic multi-head attention; and (iii) multi-scale deformable positional encoding. These components are integrated into a novel architecture, termed AgileFormer, enabling more effective capture of heterogeneous objects at every stage of a ViT-UNet. Experiments on three segmentation tasks using publicly available datasets (Synapse multi-organ, ACDC cardiac, and Decathlon brain tumor) demonstrated the effectiveness of AgileFormer for 2D and 3D segmentation. Notably, AgileFormer sets a new state-of-the-art performance with Dice scores of 85.74% and 87.43% for 2D and 3D multi-organ segmentation on Synapse without significant computational overhead. Our code is available at https://github.com/sotiraslab/AgileFormer.
Performance evaluation of deep learning algorithms in MRI breast lesion segmentation and detection
Jupeng Zhang, Qi Wu, Jinhua Hu, Xiqi Zhu, Baosheng Li
Biomedical Signal Processing and Control, Volume 112, Article 108853. DOI: 10.1016/j.bspc.2025.108853. Published 2025-10-10.
Purpose: This study systematically evaluates the efficacy of deep learning (DL) algorithms for segmenting and detecting breast lesions in magnetic resonance imaging (MRI), focusing on segmentation accuracy and clinical applicability.
Methods: Following PRISMA-DTA guidelines, we searched PubMed, Embase, Scopus, and Web of Science, identifying 19 eligible studies. Inclusion criteria were MRI studies using DL for breast lesion segmentation and detection, with comprehensive data on segmentation efficacy. Study quality was assessed using QUADAS-AI. Meta-analysis was performed using random-effects modeling, with segmentation accuracy quantified by the Dice similarity coefficient (DSC) and lesion detection efficacy by sensitivity. Heterogeneity was explored through meta-regression and subgroup analysis.
Results: The 19 studies evaluated DL algorithms such as U-Net, nnU-Net, and CNNs. DSC for segmentation ranged from 0.61 to 0.97, with a pooled DSC of 0.82 (95% CI: 0.76-0.88). Pooled sensitivity across six studies was 0.86 (95% CI: 0.75-0.98). Subgroup analyses showed higher accuracy in multicenter studies (0.86 vs. 0.80), studies with external validation (0.89 vs. 0.79), and studies using 3.0 T MRI devices (0.88 vs. 0.83). Intensity normalization also improved accuracy (0.87 vs. 0.79). nnU-Net achieved the highest DSC (0.97). Significant heterogeneity (I² = 99.6%) and publication bias (p = 0.018) were observed.
Conclusion: DL algorithms show high accuracy in breast lesion segmentation and detection, particularly in multicenter studies and those with external validation. Future research should optimize algorithms to reduce heterogeneity and validate clinical applicability.
DDAF-Net: A Dual-Direction Attention Fusion Network for retinal vessel segmentation
Jianyong Li, Chengbei Li, Lei Yang, Yanhong Liu
Biomedical Signal Processing and Control, Volume 113, Article 108829. DOI: 10.1016/j.bspc.2025.108829. Published 2025-10-10.
Abstract: Accurate and effective segmentation of retinal fundus vessel images plays a pivotal role in clinical diagnosis and treatment. However, challenging factors such as the intricate morphology, low contrast, high background noise, and class imbalance of retinal fundus vessels make precise segmentation an exceedingly difficult task. In this paper, a Dual-Direction Attention Fusion Network (DDAF-Net) is presented for the automated segmentation of retinal fundus vessels. To enhance the feature extraction capability of the segmentation network, a dual-encoder block is proposed to obtain stronger feature information: recurrent convolutions are used in parallel with standard convolutions to extract detail and global contextual information simultaneously. In addition, to address the loss of detail caused by repeated pooling operations in the encoder, a dual-direction skip connection is introduced between the encoder and decoder, enabling effective reuse of fine-grained and global contextual features and enhancing the continuity of vessel segmentation. Finally, a joint attention mechanism incorporating channel, spatial, and scale attention is proposed in the decoder to improve feature extraction for morphologically complex fine vessels and lesion-disturbed images. Experimental findings show that the proposed model extracts retinal fundus vascular detail and global contextual information simultaneously and exhibits superior performance compared to existing segmentation models.
A cross-modal Chinese radiology report generation approach for cervical cancer
Yongping Lin, Ming Li, Chunxia Chen, Juping Qiu, Jingde Hong, Binhua Dong
Biomedical Signal Processing and Control, Volume 112, Article 108887. DOI: 10.1016/j.bspc.2025.108887. Published 2025-10-10.
Abstract: Magnetic resonance imaging (MRI) is widely used in the pathological evaluation and early diagnosis of cervical cancer (CC). Conventional automatic report generation approaches are predominantly designed for single-image analysis, limiting their applicability to MRI sequences that inherently contain richer temporal and spatial information. Furthermore, sequence-based features may introduce redundancy and noise, challenging model robustness. In this study, we propose a CC Chinese report generation method (C3RG) tailored for CC MRI sequences. The proposed framework incorporates a feature refinement network (FRN) to suppress redundant channel information and enhance salient feature representation. In addition, a cross-modal memory network (CMN) and an interactive feed-forward network (IFFN) are integrated into both the encoder and decoder to facilitate efficient multimodal interaction and alignment between image and text modalities. The model is built upon a Transformer-based encoder-decoder architecture. To support training and evaluation, we construct a dedicated dataset consisting of CC MRI sequences and their corresponding Chinese diagnostic reports. Experimental results demonstrate that C3RG outperforms existing state-of-the-art models, achieving BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, and CIDEr scores of 0.458, 0.319, 0.226, 0.165, 0.379, and 0.264, respectively. Ablation studies further confirm the contribution of each component. These results indicate that C3RG holds promise for clinical deployment in automated radiology reporting for CC.
KA4GANC: A Kolmogorov–Arnold graph attention network approach for predicting gene regulations using single-cell RNA-sequencing data
Kan Zhang, Mugang Lin, Lingzhi Zhu, Yunhui Wang, Wenzhuo He
Biomedical Signal Processing and Control, Volume 113, Article 108868. DOI: 10.1016/j.bspc.2025.108868. Published 2025-10-10.
Abstract: Gene regulatory networks (GRNs) reveal the internal mechanisms and complex relationships of gene expression regulation, and their study is of great significance for understanding life processes and for accurate disease diagnosis. Although single-cell RNA sequencing (scRNA-seq) technology has enabled expression analysis at the cell level, most existing methods focus on interactions between local genes and struggle to capture the overall organizational structure and long-range regulatory effects. In this study, we present a novel supervised method named KA4GANC, which integrates the Kolmogorov–Arnold Network (KAN) with a Graph Attention Network (GAT) to address critical limitations in capturing global regulatory architecture from scRNA-seq data. KA4GANC's novelty lies in two key components. First, it leverages a Fourier KAN for nonlinear feature transformation via adaptive, multi-scale Fourier basis functions, thereby generating highly expressive gene embeddings. Second, it replaces the linear transformations in graph attention layers with a KAN-based convolution, enabling the model to learn complex nonlinear local interactions and effectively preserve neighborhood topology in the latent space. Benchmark evaluations on seven scRNA-seq datasets across three ground-truth network types demonstrate KA4GANC's state-of-the-art performance, achieving an average AUROC of 0.84 with 34.6% faster convergence.
{"title":"A pectoral muscle suppression approach for improved deep learning-based mammogram image analysis","authors":"Jyoti Chowdhary , Praveen Sankaran , Shailaj Kurup","doi":"10.1016/j.bspc.2025.108843","DOIUrl":"10.1016/j.bspc.2025.108843","url":null,"abstract":"<div><div>Breast cancer persists as a major health concern for women globally, and the best course of treatment depends on early detection. Although mammography is widely used as a monitoring tool, its limitations in accurately identifying subtle early-stage lesions and classifying malignant tumors persist. This research aims to develop an advanced mammogram analysis system that prioritizes the identification and classification of malignant tumors. The proposed methodology includes data preprocessing, pectoral muscle suppression, precise tumor localization, and subsequent classification into malignant or benign categories. To ensure a good level of precision in tumor detection, minimizing the disruption caused by the pectoral muscle is imperative. Effective suppression of muscle tissue improves image quality and facilitates precise identification of potential tumors. The publicly available CBIS-DDSM and VinDr-Mammo dataset were utilized for model training and testing. The proposed methodology, which integrates YOLOV8s with pectoral muscle suppression, achieved an accuracy of 97.94 ± 0 69%, a precision of 98.77%, and a recall of 96.98% when the CBIS-DDSM data set is used. An accuracy of 99.70 ± 0.15%, a precision of 100%, and a recall of 99.39% are achieved when using the VinDr-Mammo dataset. The combination of CBIS-DDSM and VinDr-Mammo is then used to train the model and is tested on a private dataset (NITC-MVR) to test its performance in a real-world clinical setting. This heterogeneous test resulted in an overall accuracy rate of 95.48% with a precision of 97.72% and a recall of 94.50%.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108843"},"PeriodicalIF":4.9,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PixelINR: Scan-specific self-supervised MRI reconstruction based on implicit neural representations","authors":"Songxiao Yang , Yafei Ou , Masatoshi Okutomi","doi":"10.1016/j.bspc.2025.108838","DOIUrl":"10.1016/j.bspc.2025.108838","url":null,"abstract":"<div><div>Accelerated MRI involves a trade-off between sampling sufficiency and acquisition time. Although supervised and self-supervised deep learning approaches have shown promise in reconstructing under-sampled MR images, they typically rely on large-scale training datasets. This dependence increases the risk of overfitting and hallucinated features, particularly when training data diverges from test-time distributions. In this paper, we propose PixelINR, a scan-specific, self-supervised reconstruction method based on implicit neural representations (INR) that requires only a single under-sampled scan for training. By eliminating the need for external training databases, scan-specific PixelINR mitigates hallucination risks and improves generalization to diverse acquisition settings. To further enhance image quality, we incorporate anti-blurriness regularization in the image domain and a frequency-domain inpainting loss, guiding the model to recover sharp structures and plausible k-space content. Experimental results demonstrate that PixelINR outperforms existing scan-specific approaches in both reconstruction accuracy and robustness. Our implementation is publicly available at: <span><span>https://github.com/YSongxiao/PixelINR</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"112 ","pages":"Article 108838"},"PeriodicalIF":4.9,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}