Biomedical Signal Processing and Control: Latest Articles

CVI-UNet: A multi-segmentation U-Net for automated choroidal vascularity index calculation from EDI-OCT images
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-18, DOI: 10.1016/j.bspc.2025.108319
Juwaria Qadri, Angel Arul Jothi J., Manish Jain
{"title":"CVI-UNet: A multi-segmentation U-Net for automated choroidal vascularity index calculation from EDI-OCT images","authors":"Juwaria Qadri ,&nbsp;Angel Arul Jothi J. ,&nbsp;Manish Jain","doi":"10.1016/j.bspc.2025.108319","DOIUrl":"10.1016/j.bspc.2025.108319","url":null,"abstract":"<div><div>The choroidal vascularity index (CVI) has become a useful measure for monitoring ocular health in inflammatory and vascular disorders. It is also a valuable parameter to monitor the potential adverse effect of drugs that modulate choroidal blood flow. This research aims to automatically segment the choroid as well as the vessels for accurate CVI calculation in enhanced depth imaging-optical coherence tomography (EDI-OCT) images. For this, CVI-UNet, a multi-segmentation U-Net with a channel attention module (CAM) and a vessel enhancement block (VEB) is proposed. The proposed model has a common encoder that is shared between the choroid segmentation decoder and the vessel segmentation decoder. It is capable of automatically segmenting the entire choroid region, as well as the vessels present in the choroid simultaneously in an end-to-end fashion from the input OCT images. The segmentation performance of CVI-UNet is quantitatively assessed on a completely anonymized dataset. The proposed method yielded an accuracy, dice similarity coefficient and jaccard index of 97.04%, 96.83% and 94.25% respectively for choroid segmentation and 95.58%, 93.99% and 92.56% respectively for vessel segmentation. The reliability of the CVI values and the agreement of the CVI values computed using the manual and the automated methods are also evaluated. Our results indicate that the CVI-UNet can accurately and efficiently segment choroid and vessels from the OCT images, which will be helpful to quantify the CVI.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108319"},"PeriodicalIF":4.9,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144655325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
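Once the two masks produced by CVI-UNet are available, the CVI itself is conventionally the ratio of luminal (vessel) area to total choroidal area. Below is a minimal NumPy sketch of that ratio computed from binary masks; the mask names and the toy 4x4 example are illustrative, not taken from the paper.

```python
import numpy as np

def choroidal_vascularity_index(choroid_mask: np.ndarray, vessel_mask: np.ndarray) -> float:
    """CVI = luminal (vessel) area / total choroidal area, from binary masks."""
    # Restrict vessel pixels to those lying inside the segmented choroid.
    vessel_in_choroid = np.logical_and(choroid_mask > 0, vessel_mask > 0)
    total_choroid_area = np.count_nonzero(choroid_mask)
    luminal_area = np.count_nonzero(vessel_in_choroid)
    if total_choroid_area == 0:
        raise ValueError("Empty choroid mask")
    return luminal_area / total_choroid_area

# Toy example: 8 choroid pixels, 3 of them vessel lumen -> CVI = 0.375.
choroid = np.zeros((4, 4), dtype=np.uint8)
choroid[1:3, :] = 1
vessels = np.zeros_like(choroid)
vessels[1, 0:3] = 1
print(round(choroidal_vascularity_index(choroid, vessels), 3))
```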
Advancing MRI reconstruction: A systematic review of deep learning and compressed sensing integration
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-18, DOI: 10.1016/j.bspc.2025.108291
Mojtaba Safari, Zach Eidex, Chih-Wei Chang, Richard L.J. Qiu, Xiaofeng Yang
{"title":"Advancing MRI reconstruction: A systematic review of deep learning and compressed sensing integration","authors":"Mojtaba Safari,&nbsp;Zach Eidex,&nbsp;Chih-Wei Chang,&nbsp;Richard L.J. Qiu,&nbsp;Xiaofeng Yang","doi":"10.1016/j.bspc.2025.108291","DOIUrl":"10.1016/j.bspc.2025.108291","url":null,"abstract":"<div><div>Magnetic resonance imaging (MRI) is a non-invasive imaging modality and provides comprehensive anatomical and functional insights into the human body. However, its long acquisition times can lead to patient discomfort, motion artifacts, and limiting real-time applications. To address these challenges, strategies such as parallel imaging have been applied, which utilize multiple receiver coils to speed up the data acquisition process. Additionally, compressed sensing (CS) is a method that facilitates image reconstruction from sparse data, significantly reducing image acquisition time by minimizing the amount of data collection needed. Recently, deep learning (DL) has emerged as a powerful tool for improving MRI reconstruction. It has been integrated with parallel imaging and CS principles to achieve faster and more accurate MRI reconstructions. This review comprehensively examines DL-based techniques for MRI reconstruction. We categorize and discuss various DL-based methods, including end-to-end approaches, unrolled optimization, and federated learning, highlighting their potential benefits. Our systematic review highlights significant contributions and underscores the potential of DL in MRI reconstruction. Additionally, we summarize key results and trends in DL-based MRI reconstruction, including quantitative metrics, the dataset, acceleration factors, and the progress of and research interest in DL techniques over time. Finally, we discuss potential future directions and the importance of DL-based MRI reconstruction in advancing medical imaging. To facilitate further research in this area, we provide a GitHub repository that includes up-to-date DL-based MRI reconstruction publications and public datasets-<span><span>https://github.com/mosaf/Awesome-DL-based-CS-MRI</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108291"},"PeriodicalIF":4.9,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144655326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
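For readers new to the area, the compressed sensing reconstruction that these DL methods accelerate or unroll is conventionally posed as a regularized least-squares problem; the formulation below is the standard textbook form, not an equation quoted from this review.

```latex
\hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\lVert E\,x - y \rVert_2^2 \;+\; \lambda\,\lVert \Psi x \rVert_1
```

Here y is the undersampled k-space data, E the (coil-weighted) undersampled Fourier encoding operator, Psi a sparsifying transform, and lambda the regularization weight; unrolled DL methods typically replace the regularization or proximal step of an iterative solver for this problem with a learned network.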
Enhancing clinical data security with the contextual polynomial-based data protection model (CPDPM)
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-17, DOI: 10.1016/j.bspc.2025.108329
D. Dhinakaran, R. Ramani, S. Edwin Raja, D. Selvaraj
{"title":"Enhancing clinical data security with the contextual polynomial-based data protection model (CPDPM)","authors":"D. Dhinakaran ,&nbsp;R. Ramani ,&nbsp;S. Edwin Raja ,&nbsp;D. Selvaraj","doi":"10.1016/j.bspc.2025.108329","DOIUrl":"10.1016/j.bspc.2025.108329","url":null,"abstract":"<div><div>Today, the healthcare industry mostly uses the electronic record of the patient’s health, also known as the electronic health record or simply EHR. However, since the health records contain highly confidential patient information shared across computerized systems, it is under threat from hackers and cybercriminals, data theft and unauthorized access. Healthcare information is sensitive, requires to be whole and accessible and this is a major challenge that needs strong security solutions. Here, we present the Contextual Polynomial-Based Data Protection Model (CPDPM), a new additive model designed to improve clinical data security through the use of advanced encryption and access control methods in conjunction with a polynomial-based data protection model, specifically developed for the healthcare context. The major issues we observed regarding clinical data protection are based around the issues of encryption strength as well as the consequences, such as the performance and utilization of resources. Furthermore, access restrictions and data integrity have to be dynamic and respond to changes of a system, especially if it involves multiple parties. Our approach handles these considerations since the proposed polynomial-based framework guarantees the security of the data as well as scalability to huge healthcare systems. We determined the performance of the proposed model by assessing restrictive access system, and the encryption and decryption time analysis, data security, through put, and network overhead analysis using real EHR datasets. In comparison with other models such CP-BDHCA, EHRC, B-IBE and HH-IPFS, our model gave better results. For example, CPDPM had higher performance than HH-IPFS by 9 % and then CP-BDHCA by 18 % in terms of Access Restriction Performance. For Encryption Performance, our proposed model was 8 % more efficient than B-IBE, and for decryption performance, the findings also reveal that our model is 14 % more efficient than HH-IPFS. Moreover, Security Performance indicated a coverage of at least 20 % in comparison to conventional data security, and throughput performance recorded improvement of 12 % responding to existing systems.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108329"},"PeriodicalIF":4.9,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144655401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
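The abstract does not describe the polynomial construction itself. As generic background, the sketch below shows the classic polynomial technique used for protecting shared records, Shamir secret sharing over a prime field, in which any k of n shares reconstruct the value; this is an illustrative stand-in, not the CPDPM scheme.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for small integer secrets

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it (Shamir, illustrative)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    # Share i is the degree-(k-1) polynomial evaluated at x = i (never at x = 0).
    return [(x, sum(c * pow(x, j, PRIME) for j, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789
```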
Brain tumor detection and classification using U-Net and CNN with brain texture pattern analysis
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-17, DOI: 10.1016/j.bspc.2025.108156
Ben M. Jebin, S. Immaculate Shyla, K. Nalini Sujantha Bel, C. Jaspin Jeba Sheela
{"title":"Brain tumor detection and classification using u-net and CNN with brain texture pattern analysis","authors":"Ben M. Jebin ,&nbsp;S. Immaculate Shyla ,&nbsp;K. Nalini Sujantha Bel ,&nbsp;C. Jaspin Jeba Sheela","doi":"10.1016/j.bspc.2025.108156","DOIUrl":"10.1016/j.bspc.2025.108156","url":null,"abstract":"<div><div>Magnetic Resonance Imaging (MRI) based brain tumor detection and classification are essential but challenging procedures in clinical diagnosis. Timely diagnosis and precise localization of brain tumors help to save lives and aid physicians in applying suitable treatment procedures. Recently, deep-learning approaches have significantly assisted physicians in accurately diagnosing tumor regions and types. This paper proposes a brain tumor detection and classification approach using a convolutional neural network (CNN) with U-net architecture and Brain texture pattern. Usage of enhanced tumor region while preserving the texture pattern of the remaining brain region highly improves the localization performance of tumor detection. The tumor present in the image is initially detected using a modified U-Net architecture that minimizes the elimination of fine descriptors with the usage of complementary kernels. Further, the brain texture pattern is collected using a 2D-empirical mode decomposition followed by a fuzzy C-means clustering algorithm. The detected tumor is then implanted in its extracted brain texture pattern in its location. The tumor-implanted brain texture pattern is trained using a Complementary kernel-based CNN to classify the brain tumor types namely Pituitary tumor, Non-tumor, Glioma tumor, and Meningioma tumors. The proposed U-Net and CNN use complementary kernels that can extract fine features. The evaluation of the algorithm was performed using the datasets namely SARTAJ, Br35H, and Figshare with measures such as accuracy, sensitivity, specificity, F1-score, and precision. The proposed tumor detection process results in an accuracy of 96.69 %, 97.31 %, and 98.18 % when evaluated utilizing the SARTAJ, Br35H, and Figshare datasets respectively.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"110 ","pages":"Article 108156"},"PeriodicalIF":4.9,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144653760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
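For the evaluation measures listed in the entry above (accuracy, sensitivity, specificity, precision, F1-score), here is a minimal confusion-matrix sketch of their standard definitions; the toy labels are illustrative and unrelated to the SARTAJ, Br35H, or Figshare datasets.

```python
import numpy as np

def binary_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Standard confusion-matrix metrics for a binary tumor/no-tumor decision."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # also called recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision, f1=f1)

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
print({k: round(v, 3) for k, v in binary_metrics(y_true, y_pred).items()})
```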
Age prediction from infancy to adolescence at the onset of N2 sleep using an ECG-derived tachogram
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-16, DOI: 10.1016/j.bspc.2025.108234
Kartik K. Iyer, James A. Roberts, Michaela Waak, Simon J. Vogrin, Ajay Kevat, Jasneek Chawla, Leena M. Haataja, Leena Lauronen, Sampsa Vanhatalo, Nathan J. Stevenson
{"title":"Age prediction from infancy to adolescence at the onset of N2 sleep using an ECG-derived tachogram","authors":"Kartik K. Iyer ,&nbsp;James A. Roberts ,&nbsp;Michaela Waak ,&nbsp;Simon J. Vogrin ,&nbsp;Ajay Kevat ,&nbsp;Jasneek Chawla ,&nbsp;Leena M. Haataja ,&nbsp;Leena Lauronen ,&nbsp;Sampsa Vanhatalo ,&nbsp;Nathan J. Stevenson","doi":"10.1016/j.bspc.2025.108234","DOIUrl":"10.1016/j.bspc.2025.108234","url":null,"abstract":"<div><h3>Background</h3><div>Biological age is a key concept in the development of biomarkers of health and disease. We develop a prediction of the functional autonomic age (FAA) from infancy to adolescence based on the ECG-derived tachogram recorded at the onset of N2 sleep.</div></div><div><h3>Methods</h3><div>A cohort of ECG recordings from 1004 typically developing infants, children and adolescents (age range: 1 month to 17 years) was used to train feature-based and deep neural network-based regression models for the prediction of FAA. Weighted mean absolute error (wMAE) was used to define accuracy and evaluated with 10-fold cross-validation. Effect size was used to compare model accuracies and linear regression was used to evaluate confounds. The combination of FAA with an EEG-based estimate of functional brain age (FBA) was also tested.</div></div><div><h3>Results</h3><div>A feature-based FAA had a wMAE of 1.78 years (95 %CI: 1.62–1.93, <em>n</em> = 1004) and was comparable to deep neural network regression (wMAE = 1.85 years, 95 %CI: 1.66–1.98). Accuracy was affected by age and age<sup>2</sup> (t = -4.97, p &lt; 0.001 and t = 9.66, p &lt; 0.001, respectively) with smaller errors at younger ages, but not biological sex (t = -0.660, p = 0.510). Combining the FAA with a functional brain age derived from the EEG resulted improved accuracy, with neural networks-based methods superior (wMAE of 0.81 years, 95 %CI: 0.73–0.88, effect size D = 0.77, 95 %CI: 0.70–0.84, <em>n</em> = 1004).</div></div><div><h3>Conclusion</h3><div>FAA derived from the tachogram accurately represents age from infancy to adolescence. The combination of FAA with FBA improves age prediction accuracy.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108234"},"PeriodicalIF":4.9,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144655400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
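The entry above reports accuracy as a weighted mean absolute error (wMAE) but does not spell out the weighting scheme; below is a minimal sketch of a generic weighted MAE in which the weight vector is left as a free, illustrative parameter (for example, to balance an uneven age distribution).

```python
import numpy as np

def weighted_mae(age_true, age_pred, weights=None):
    """Weighted mean absolute error between true and predicted ages (in years).

    The weighting used in the paper is not specified here; `weights` is a free
    parameter and defaults to uniform weighting (ordinary MAE).
    """
    age_true = np.asarray(age_true, dtype=float)
    age_pred = np.asarray(age_pred, dtype=float)
    w = np.ones_like(age_true) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * np.abs(age_pred - age_true)) / np.sum(w))

# Toy example: three subjects, with the youngest weighted more heavily.
print(weighted_mae([0.5, 7.0, 15.0], [1.0, 8.5, 13.0], weights=[2.0, 1.0, 1.0]))  # 1.125
```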
Radiology report generation from a singular perspective using transformers with Knowledge Distillation
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-16, DOI: 10.1016/j.bspc.2025.108340
Asad Mansoor Khan, Mashood Mohammad Mohsan, Muhammad Usman Akram, Taimur Hassan, Sajid Gul Khawaja, Adil Qayyum
{"title":"Radiology report generation from a singular perspective using transformers with Knowledge Distillation","authors":"Asad Mansoor Khan ,&nbsp;Mashood Mohammad Mohsan ,&nbsp;Muhammad Usman Akram ,&nbsp;Taimur Hassan ,&nbsp;Sajid Gul Khawaja ,&nbsp;Adil Qayyum","doi":"10.1016/j.bspc.2025.108340","DOIUrl":"10.1016/j.bspc.2025.108340","url":null,"abstract":"<div><div>Nearly two billion chest X-rays (CXRs) are performed annually, making them the most used imaging technique in radiology for the diagnosis of pulmonary disorders. The accompanying report with the findings from a chest X-ray forms a crucial part of the examination. By providing an accurate report, healthcare professionals can be enabled to make better decisions about the care being provided. To this end, we propose an end-to-end radiology report generation framework built on transformers trained on text reports in conjunction with visual characteristics of the chest X-ray to generate a reliable report that astutely describes the findings from a single CXR taken either from the Anterior-Posterior or Posterior-Anterior position. A foundation model is utilised to perform Knowledge Distillation (KD) in conjunction with the Encoder which is fine-tuned during the training phase. In addition, using a large corpus of radiology reports to pre-train the foundation model in an unsupervised manner is shown to improve the performance on smaller datasets. This training methodology results in comparable performance to architectures that employ a lot more parameters. The proposed framework is evaluated on multiple datasets including the Indiana University dataset, MIMIC dataset, MIMIC-PRO dataset, and BRAX dataset. The incorporation of KD results in an increase of BLEU-1 score for Indiana dataset by 4% and BERTScore by 7.5%. Similarly, pre-training on larger datasets in combination with KD, further increases BLEU-1 score for Indiana dataset by 7.2% and BERTScore by 3%. For MIMIC dataset, comparable performance is achieved for the Findings and the Impression sections of the report while the proposed framework outperforms other techniques when both of these sections are combined. For MIMIC-PRO dataset, an s<span><math><msub><mrow></mrow><mrow><mi>e</mi><mi>m</mi><mi>b</mi></mrow></msub></math></span> score of 0.4069 while a RadGraph F1 score of 0.1165 is achieved outperforming other techniques in the literature. Finally, the proposed framework is also evaluated on locally gathered dataset and BRAX subset without any re-training or fine-tuning resulting in BLEU-1 score of 0.3827 and a BERTScore of 0.4392 for the former and BLEU-1 score 0.1671 of and a BERTScore of 0.2186 for latter showing generalisation ability.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108340"},"PeriodicalIF":4.9,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
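The entry above says a foundation model provides Knowledge Distillation to the encoder but does not print the objective; below is a minimal sketch of the classic logit-matching KD loss (a soft-target KL term plus a hard-label cross-entropy term, after Hinton et al.), with the temperature T and mixing weight alpha as illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style KD: temperature-softened KL term plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy batch: 4 samples over a 10-way vocabulary/class set.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```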
Type 2 diabetes mellitus associated pancreatic cancer prediction using combinations of machine learning models
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-16, DOI: 10.1016/j.bspc.2025.108240
Surabhi Seth, Kumardeep Chaudhary, Srinivasan Ramachandran
{"title":"Type 2 diabetes mellitus associated pancreatic cancer prediction using combinations of machine learning models","authors":"Surabhi Seth ,&nbsp;Kumardeep Chaudhary ,&nbsp;Srinivasan Ramachandran","doi":"10.1016/j.bspc.2025.108240","DOIUrl":"10.1016/j.bspc.2025.108240","url":null,"abstract":"<div><h3>Background</h3><div>Type 2 Diabetes Mellitus (T2DM) patients face an increased risk of developing pancreatic cancer (PaC), with studies reporting relative risk of 1.94 (confidence interval 1.66–2.27). Our goal was to identify multiple T2DM-PaC comorbidity genes and develop machine learning (ML) models for predicting T2DM-PaC comorbidity using transcriptomics gene expression datasets from blood.</div></div><div><h3>Methods</h3><div>Comorbidity genes from literature were extracted using Natural language processing. Using publicly available T2DM-PaC gene expression datasets we extracted differentially expressed genes, hub genes of co-expressed modules in weighted gene correlation network analysis, and highly perturbed genes from pathway simulations. We explored a wide range of ten ML algorithms spanning Linear Classifiers, Tree-Based Methods, Gradient Boosting Methods, and Naive Bayes Classifiers and different combinations of algorithm. For T2DM-PaC comorbidity prediction we constructed two different ML models one for T2DM and other for PaC, using T2DM-PaC comorbidity features.</div></div><div><h3>Results</h3><div>Sixty-seven T2DM-PaC comorbidity genes features were identified in total, among these ATM genes are already used in PaC diagnosis. In the T2DM model, the Logistic Regression Classifier-Support Vector Machine combination achieved an F1 score of 0.80 and Matthews Correlation Coefficient (MCC) of 0.65. In the PaC model, the Guassian Naive Bayes-eXtreme Gradient Boosting combination had an F1 score of 0.96 and MCC of 0.94. The T2DM-PaC ensemble model tested on a T2DM-PaC comorbidity dataset had an F1 score of 0.89 and MCC of 0.77.</div></div><div><h3>Conclusion</h3><div>Built ensemble ML models could identify T2DM-PaC comorbidity with 89 % accuracy. These ML models could aid in screening for PaC in T2DM patients and are available at <span><span>https://github.com/suroseth/T2DM-PaC_comorbidity_predictor</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108240"},"PeriodicalIF":4.9,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
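The entry above names paired algorithm combinations (for example, Logistic Regression with a Support Vector Machine) but not how the pair is fused; a soft-voting ensemble is one plausible reading, sketched below with scikit-learn on synthetic data standing in for the 67 gene features. The dataset, estimators and voting scheme are illustrative assumptions, not the authors' pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a 67-feature gene-expression matrix (not real data).
X, y = make_classification(n_samples=300, n_features=67, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Soft-voting combination of a linear classifier and an SVM.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)), ("svm", SVC(probability=True))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print("F1:", round(f1_score(y_te, pred), 2), "MCC:", round(matthews_corrcoef(y_te, pred), 2))
```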
Evolution of waveform characteristics in motor imagery among healthy individuals
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-16, DOI: 10.1016/j.bspc.2025.108263
Chien-Hung Yeh, Chuting Zhang, Wenbin Xu, Wenbin Shi
{"title":"Evolution of waveform characteristics in motor imagery among healthy individuals","authors":"Chien-Hung Yeh ,&nbsp;Chuting Zhang ,&nbsp;Wenbin Xu ,&nbsp;Wenbin Shi","doi":"10.1016/j.bspc.2025.108263","DOIUrl":"10.1016/j.bspc.2025.108263","url":null,"abstract":"<div><div>Motor imagery affects brain activity patterns across frequencies, yet most studies primarily focused on power changes, overlooking intrinsic oscillatory characteristics including waveform nonlinearity and sharpness. To this end, we introduced ensemble empirical mode decomposition (EEMD) to access evolving waveforms, preserving the temporal features of the raw signal across scales. This study used EEG data collected from 20 healthy participants over three consecutive days, randomly assigned to real or sham neurofeedback groups. Each participant completed multiple sessions with or without motor imagery training, and the real-time feedback was based on beta burst detection over the contralateral motor cortex (C3/C4) in the neurofeedback phase. We demonstrated the superiority of EEMD in preserving evolving waveforms of decompositions compared to traditional methods, and systematically compared the degree of nonlinearity, sharpness, and averaged power across frequency bands in motor imagery tasks for healthy individuals. Following neurofeedback training, both the degree of nonlinearity and averaged power in the gamma band exhibited a significant increase, whereas averaged power and sharpness in the low-beta band decreased compared to the no-training condition. The waveform features exhibited an elevated classification performance about power features, improving motor imagery detection accuracies from 76.7% to 82.0%. These findings suggest the significance of waveform characteristics as useful biomarkers alongside average power in identifying motor imagery engagement. The proposed method provides theoretical support for the potential application in motor imagery-related rehabilitation.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108263"},"PeriodicalIF":4.9,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
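Waveform sharpness is not defined in the abstract above; one common definition in the waveform-shape literature (for example, Cole and Voytek) is the mean voltage rise from a few milliseconds on either side of each peak, sketched below with SciPy's peak finder. The 5 ms window and the toy 20 Hz (low-beta) signal are illustrative assumptions, and the paper's exact measure may differ.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_sharpness(signal: np.ndarray, fs: float, window_ms: float = 5.0) -> float:
    """Mean peak sharpness: average voltage rise over +/- window_ms around each peak."""
    half = max(1, int(round(window_ms * 1e-3 * fs)))
    peaks, _ = find_peaks(signal)
    peaks = peaks[(peaks >= half) & (peaks < len(signal) - half)]
    if len(peaks) == 0:
        return float("nan")
    rises = [(signal[p] - signal[p - half] + signal[p] - signal[p + half]) / 2 for p in peaks]
    return float(np.mean(rises))

# Toy example: a 20 Hz oscillation sampled at 250 Hz.
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 20 * t)
print(round(peak_sharpness(x, fs), 3))
```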
Data fitting for the neural mass model using Unscented Kalman Filters
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-16, DOI: 10.1016/j.bspc.2025.108303
Renjie Li, Miao Dong, Dun Ao, Xian Liu
{"title":"Data fitting for the neural mass model using Unscented Kalman Filters","authors":"Renjie Li ,&nbsp;Miao Dong ,&nbsp;Dun Ao ,&nbsp;Xian Liu","doi":"10.1016/j.bspc.2025.108303","DOIUrl":"10.1016/j.bspc.2025.108303","url":null,"abstract":"<div><div>Epilepsy is a neurological disorder characterized by seizures, and it remains challenging because many patients do not respond to current drug treatments. Despite advancements in brain dynamics research, existing methods often fail to capture the personalized and dynamic nature of epileptic seizures, particularly in real-time situations.</div><div>In this study, the Unscented Kalman Filter (UKF) and the Neural Mass Model (NMM) were used to model the electroencephalogram (EEG) signals of epileptic patients, with the aim of characterizing different stages of epileptic seizures. Based on the results of parameter identification, we developed personalized epileptic seizure models to provide accurate estimates of brain states at different stages of seizures. Our findings suggest that these personalized models can offer valuable insights for closed-loop control strategies, advancing personalized treatment approaches for epilepsy.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108303"},"PeriodicalIF":4.9,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
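The entry above does not give the filter equations, so as generic background here is a minimal sketch of the unscented transform that a UKF applies at each predict and update step: a small set of sigma points is propagated through the nonlinearity and re-averaged. The scaling parameters (alpha, beta, kappa) and the sigmoid firing-rate function are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f via sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)            # matrix square root of the scaled covariance
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))   # mean weights
    wc = wm.copy()                                     # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma])                # propagated sigma points
    y_mean = wm @ Y
    diff = Y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Toy example: push a 2-D state through a sigmoid-like firing-rate nonlinearity,
# the kind of static nonlinearity that appears inside neural mass models.
firing_rate = lambda v: 5.0 / (1.0 + np.exp(-v))
m, C = unscented_transform(np.array([0.0, 1.0]), 0.1 * np.eye(2), firing_rate)
print(np.round(m, 3), np.round(C, 4))
```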
Convolutional masked encoder and contrastive learning for small-scale medical language-image pre-training
IF 4.9 · Zone 2 (Medicine)
Biomedical Signal Processing and Control, Pub Date: 2025-07-16, DOI: 10.1016/j.bspc.2025.108308
Lei Liu, Xiangdong Su, Xiaoming Wang, Guanglai Gao
{"title":"Convolutional masked encoder and contrastive learning for small-scale medical language-image pre-training","authors":"Lei Liu,&nbsp;Xiangdong Su,&nbsp;Xiaoming Wang,&nbsp;Guanglai Gao","doi":"10.1016/j.bspc.2025.108308","DOIUrl":"10.1016/j.bspc.2025.108308","url":null,"abstract":"<div><div>We present an efficient pre-training method for small-scale medical language-image tasks, incorporating a convolutional masked encoder (CME) and small-scale language-image contrastive learning pre-training (S<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>LIP). Our CMES<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>LIP (Convolutional Masked Encoder with Small-scale Language-Image Pre-training) method integrates concepts from two powerful pre-training techniques, self-supervised MAE (Masked Autoencoder) and CLIP (Contrastive Language-Image Pre-training), enhancing the synergy between medical images and text. Comprising two crucial components, CME and S<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>LIP, CMES<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>LIP demonstrates remarkable efficacy. CME excels on small-scale image datasets, leveraging a UNet encoder for masked image extraction, a UNet decoder for image reconstruction, and an improved pre-training loss function. These refinements significantly improve the reconstruction of masked images, outperforming MAE and delivering remarkable downstream medical task improvements. To adapt CLIP for small-scale datasets, we design S<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>LIP, which replaces traditional Transformers with the text embedding layer as the text encoder while integrating the UNet encoder as the image encoder. This strategic adjustment accelerates model convergence and yields substantial downstream task improvements. By combining CME and the S<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>LIP, CMES<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>LIP emerges as a groundbreaking multimodal pre-training approach. CME effectively enhances unimodal tasks such as medical image classification and medical image segmentation, even with a limited amount of pre-training data. CMES<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>LIP efficiently improves medical image–text retrieval, medical image–text classification, and medical visual question answering by pre-training only sixty thousand image–text pairs.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"111 ","pages":"Article 108308"},"PeriodicalIF":4.9,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144633496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
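S²LIP in the entry above adapts CLIP-style contrastive pre-training to small paired datasets; the abstract does not show the loss, so below is a minimal sketch of the standard symmetric InfoNCE objective that CLIP-style methods optimize, with batch size, embedding dimension and temperature as illustrative values.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over an image-text batch (CLIP-style)."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature    # pairwise cosine similarities
    targets = torch.arange(image_emb.size(0))          # matching pairs sit on the diagonal
    loss_i2t = F.cross_entropy(logits, targets)        # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)    # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

# Toy batch: 8 paired image/text embeddings of dimension 128.
img = torch.randn(8, 128)
txt = torch.randn(8, 128)
print(float(clip_contrastive_loss(img, txt)))
```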