Biomedical Signal Processing and Control: Latest Articles

An efficient dual-line decoder network with multi-scale convolutional attention for multi-organ segmentation
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-13 · DOI: 10.1016/j.bspc.2025.108611
Riad Hassan, M. Rubaiyat Hossain Mondal, Sheikh Iqbal Ahamed, Fahad Mostafa, Md Mostafijur Rahman

Accurate segmentation of organs-at-risk is important for radiation therapy, surgical planning, and diagnostic decision-making in medical image analysis. While deep learning-based segmentation architectures have made significant progress, they often fail to balance segmentation accuracy with computational efficiency: most current state-of-the-art (SOTA) methods either prioritize performance at the cost of high computational complexity or compromise accuracy for efficiency. This paper addresses this gap by introducing an efficient dual-line decoder segmentation network (EDLDNet). In addition to a noise-free decoder, the proposed method features a noisy decoder, which learns to incorporate structured perturbation at training time for better robustness; at inference time only the noise-free decoder is executed, lowering computational cost. Multi-Scale Convolutional Attention Modules (MSCAMs), Attention Gates (AGs), and Up-Convolution Blocks (UCBs) are further used to optimize feature representation and boost segmentation performance. By leveraging multi-scale segmentation masks from both decoders, a mutation-based loss function also enhances the model's generalization. The proposed method outperforms SOTA segmentation architectures on four publicly available medical imaging datasets (Synapse, ACDC, SegThor, and LCTSC). EDLDNet achieves an 84.00% Dice score on the Synapse dataset, surpassing baselines such as UNet by 13.89% in Dice score while reducing Multiply-Accumulate operations (MACs) by 89.7%. Compared with recent approaches such as EMCAD, EDLDNet achieves a higher Dice score while maintaining comparable computational efficiency. Consistent performance across diverse datasets demonstrates EDLDNet's strong generalization, computational efficiency, and robustness. The source code, pre-processed data, and pretrained weights are available at https://github.com/riadhassan/EDLDNet.

Citations: 0
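The Dice score quoted above measures overlap between a predicted and a reference mask. As a quick reference, a minimal pure-Python sketch of the metric (illustrative only, not the authors' code):

```python
def dice_score(pred, target):
    """Dice coefficient for two binary masks given as flat 0/1 lists.

    Dice = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks are a perfect match.
    return 1.0 if total == 0 else 2.0 * intersection / total
```

In practice the score is averaged over organs and cases, which is how per-dataset figures like 84.00% arise.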
MS-LKSeg: enhancing multi-semantic synergistic learning with large kernel convolution for medical image segmentation
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-13 · DOI: 10.1016/j.bspc.2025.108635
Bicao Li, Danting Niu, Ruoyu Wang, Bei Wang, Haiyang Liu, Runchuan Li, Xuwei Guo, Wei Li

Medical image segmentation aims to accurately distinguish different regions, tissues, or lesions in images. Introducing multiple kinds of semantic information, such as shape, texture, and location, can provide more clues and grounds for segmentation algorithms; however, effectively fusing and exploiting such information remains a major challenge. To address this problem, we propose MS-LKSeg, which explores the synergistic relationship between spatial attention and channel attention at different semantic levels and integrates the rich, diverse semantic information originating from those levels more efficiently. Specifically, we introduce a Multi-semantic Information Synergy (MIS) block in the encoder of MS-LKSeg, which captures different semantic spatial structures by extracting features along the spatial dimensions (height and width), decomposing them into sub-features, and passing them through a shared depthwise convolutional layer. It then explicitly models long-range dependencies among channels to achieve robust feature interactions, alleviating the disparities among multi-semantic information. Additionally, in the skip connections of MS-LKSeg, we apply depthwise convolution with a large kernel to the encoder's output to capture broader contextual information and reduce information loss during feature propagation, which also helps the model maintain higher accuracy and robustness in segmentation tasks. The superiority of our method is demonstrated by experimental evaluations on multiple publicly available datasets. Our implementation code is available at https://github.com/niuniude/MS-LKSeg.

Citations: 0
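The appeal of large-kernel depthwise convolution, as used in MS-LKSeg's skip connections, is that its parameter cost grows with the channel count rather than with channel pairs. A back-of-the-envelope comparison (generic formulas, not the paper's exact modules):

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def depthwise_conv_params(c_in, k):
    """Weights in a depthwise k x k convolution: one k x k filter per channel."""
    return c_in * k * k
```

For example, a 7x7 standard convolution mapping 64 to 64 channels costs 64 * 64 * 49 = 200704 weights, while a 7x7 depthwise convolution over the same 64 channels costs only 64 * 49 = 3136, which is why large kernels become affordable in the depthwise setting.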
Temporal and topographic effects of longer auditory stimuli on slow oscillations during slow wave sleep
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-13 · DOI: 10.1016/j.bspc.2025.108649
Marek Piorecký, Filip Černý, Václava Piorecká, Daniela Dudysová, Jana Kopřivová

Closed-loop targeted memory reactivation (CL-TMR) is a novel method for precisely targeting and reactivating selected memories consolidated during sleep. Electrophysiologically, it evokes slow oscillations (SOs) associated with increased depth of non-rapid eye movement (NREM) 3 sleep. We performed event-related potential (ERP) analyses on NREM 3 sleep data collected during auditory stimulation with 300 ms sounds. SOs were further characterized using topographical mapping and Hjorth parameters, with trials categorized into upstate and downstate segments based on stimulation phase. Our findings revealed significant differences between spontaneous and evoked SOs in both topographical distribution and signal complexity. Upstate stimulation produced stronger responses in frontal and occipital regions, particularly around the P300 component, suggesting greater cognitive processing than downstate stimulation, as confirmed by a subsequent spectral entropy analysis. Finally, time-frequency analyses of post-stimulation EEG, using image-based feature extraction, revealed no distinctions between the effects of individual cues. Despite variability in acoustic properties, the evoked SOs remained spectrally similar, indicating similar early brain responses across different stimuli and suggesting that using a larger number of stimuli may not be optimal for CL-TMR experiments.

Citations: 0
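The Hjorth parameters used to characterize the SOs are defined from the variance of a signal and of its successive differences. A minimal sketch (illustrative only; discrete differences stand in for derivatives, and normalization conventions vary):

```python
def _var(x):
    """Population variance of a list of samples."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def hjorth_parameters(signal):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    d1 = [b - a for a, b in zip(signal, signal[1:])]  # first difference
    d2 = [b - a for a, b in zip(d1, d1[1:])]          # second difference
    activity = _var(signal)
    mobility = (_var(d1) / activity) ** 0.5
    complexity = (_var(d2) / _var(d1)) ** 0.5 / mobility
    return activity, mobility, complexity
```

Activity tracks signal power, mobility tracks mean frequency, and complexity tracks deviation from a pure sine, which is why the trio is a compact descriptor of SO shape.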
Medical volume CT-to-MRI translation with multi-dimensional diffusion architecture
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-13 · DOI: 10.1016/j.bspc.2025.108627
Yusen Ni, Ji Ma, Jinjin Chen

In recent years, neural-network-based techniques for image generation, known as generative networks, have proliferated. Among them, the diffusion model is currently the most popular, outperforming others in numerous domains such as image super-resolution, image inpainting, and text-to-image generation. However, most research addresses two-dimensional (2D) image generation; few efforts focus on three-dimensional (3D) settings such as video and volumetric data generation. The objective of our research is to develop a method for translating Computed Tomography volumes (CT volumes) into Magnetic Resonance Imaging volumes (MRI volumes). To achieve this goal, four challenges must be addressed: the large amount of memory required, long inference times, limited training data, and inaccurate fine details in the results. We therefore combine a 3D latent diffusion model with a 2D diffusion model to overcome these challenges. Furthermore, unlike traditional methods that pad the input first, we introduce a module, termed a scalable module, which allows the input to adapt to different shapes in each layer of the model. We compare our model with state-of-the-art methods, and the experimental results demonstrate that our method outperforms them.

Citations: 0
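The forward (noising) process of a diffusion model has a closed form: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. A one-step sketch of this generic DDPM formula (not the paper's specific 3D/2D architecture):

```python
def diffusion_forward(x0, alpha_bar_t, eps):
    """Sample x_t from q(x_t | x_0) for a flat list of values.

    alpha_bar_t is the cumulative noise-schedule product at step t;
    eps is pre-drawn standard Gaussian noise (passed in so the step
    is deterministic and testable).
    """
    a = alpha_bar_t ** 0.5
    b = (1.0 - alpha_bar_t) ** 0.5
    return [a * x + b * e for x, e in zip(x0, eps)]
```

Training then amounts to teaching a network to predict eps from x_t; translation models condition that network on the source modality (here, the CT volume).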
Detection of paroxysmal atrial fibrillation from non-episodic ECG data using multi-dimensional feature representation and learning
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-12 · DOI: 10.1016/j.bspc.2025.108630
Muqing Deng, Xiaojin Ji, Dandan Liang, Dakai Liang, Yanjiao Wang, Xiaoyu Huang

Paroxysmal atrial fibrillation (PAF) detection from routine electrocardiogram (ECG) signals remains one of the most challenging problems in the research community, since non-episodic ECG alone fails to diagnose PAF. In this paper, a new PAF detection algorithm based on non-episodic ECG data using multi-dimensional feature representation and learning is proposed. Mean amplitude spectrum (MAS), mel-frequency cepstral coefficients (MFCC), wavelet packet features (WPFS), and statistical wavelet packet features (SFS) are derived and represented as multi-dimensional image features. These four kinds of cardiac time-frequency representations reflect the dynamical characteristics of the beating heart from four different aspects and have been shown to be more sensitive for detecting latent PAF, even before visible pathologic ECG changes can be observed. The extracted cardiac representations are then combined with deep learning, and a parallel DenseNet-based feature learning scheme is proposed. Deep features underlying the four representations are fused at the decision level to improve classification performance, and incoming ECG test signals are finally classified according to a min-rule decision principle. Experimental results show that accuracies of 81.66%, 85.41%, and 91.25% are achieved on the PHY-PAF ECG database under two-fold, five-fold, and ten-fold cross-validation, respectively.

Citations: 0
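The min-rule fusion mentioned above picks the class whose worst-case posterior across classifiers is highest, a conservative combining rule. A compact sketch (the generic rule, not the authors' implementation):

```python
def min_rule_decision(posteriors):
    """Decision-level min-rule fusion.

    posteriors: one class-probability list per classifier, all the same
    length. Returns the index of the class whose minimum posterior across
    classifiers is largest.
    """
    n_classes = len(posteriors[0])
    fused = [min(p[c] for p in posteriors) for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)
```

For example, three classifiers giving class-0 posteriors of 0.9, 0.6, and 0.7 yield a fused class-0 score of 0.6, so class 0 wins only if no other class beats that floor.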
Classifying Alzheimer's disease using machine learning: Insights from default mode network alterations
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-12 · DOI: 10.1016/j.bspc.2025.108526
Swarun Raj R.S., Binish M.C., Navya V.N., Vinu Thomas

Alzheimer's disease (AD) is a brain disorder marked by a progressive loss of cognitive function that can ultimately be fatal. It has become a global health concern and is the most frequent type of dementia in the elderly. Although there is currently no cure, medications can slow its progression, so early identification of AD is vital for controlling and limiting the illness. Here, a machine-learning approach is proposed for detecting AD by examining alterations in the functional connections of the Default Mode Network (DMN). The study uses fMRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. We extract time-series signals from 11 voxel regions of the DMN, compute functional connectivity using both Pearson correlation and instantaneous phase synchronization, and train various classifiers. A 10-fold cross-validation strategy was employed to ensure robustness and generalizability. Among the classifiers, the linear SVM model achieved the best performance, with an accuracy of 93.33%, sensitivity of 95.56%, and specificity of 91.11% under 10-fold cross-validation. These results outperform prior DMN-based approaches and demonstrate the utility of dynamic synchronization features in early AD diagnosis.

Citations: 0
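The Pearson-correlation connectivity used here reduces, for each region pair, to a normalized covariance of the two time series. A dependency-free sketch of building the region-by-region matrix (illustrative, not the study's pipeline):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def connectivity_matrix(series):
    """Symmetric functional-connectivity matrix over region time series."""
    n = len(series)
    return [[1.0 if i == j else pearson(series[i], series[j])
             for j in range(n)] for i in range(n)]
```

With 11 DMN regions this yields an 11x11 matrix whose upper triangle (55 values) is the natural feature vector for a classifier such as the linear SVM.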
LiteFANet: A lightweight UNet-based fusion-attention segmentation network for 2D and 3D medical images
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-12 · DOI: 10.1016/j.bspc.2025.108632
Kang Xu, Xiaoming Guo, Bin Pan, Yunhui Zhang, Yezi Liu, Xuan Zhang, Xiao Zeng, Yu Liu

Current state-of-the-art 2D and 3D medical image segmentation methods achieve remarkable accuracy but usually incur high computational overhead and large model sizes, making deployment on edge devices challenging. To address this issue, we propose LiteFANet, a new lightweight medical image segmentation model that compresses parameters and reduces computational complexity without noticeably sacrificing segmentation accuracy. Built on a simplified U-Net backbone, LiteFANet introduces a lightweight multi-branch feature-fusion module for more efficient integration of local and global information. In addition, we design a multi-semantic spatial-channel collaborative attention module that preserves long-range dependency modeling while substantially cutting the computational burden of self-attention. Experiments demonstrate that, even with parameter counts kept below 0.92 M (2D) and 0.59 M (3D), LiteFANet attains outstanding performance on three 2D and 3D benchmark medical segmentation datasets, confirming an excellent trade-off between accuracy and efficiency. Our method is highly practical, and the code is available at https://github.com/CR818-web/LiteFANet.

Citations: 0
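Fusing local and global branches is commonly done with a learned gate. A toy scalar-gate sketch (hypothetical; the paper's fusion-attention module is more elaborate than a single sigmoid blend):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(local_feat, global_feat, gate_logit):
    """Blend local and global feature vectors with a scalar sigmoid gate.

    gate_logit is a learned scalar: large positive values favor the
    local branch, large negative values favor the global branch.
    """
    g = sigmoid(gate_logit)
    return [g * l + (1.0 - g) * h for l, h in zip(local_feat, global_feat)]
```

A gate costs almost nothing in parameters, which is one reason gated fusion appears so often in lightweight architectures.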
Adaptive multi-modal temporal fusion network with dynamic synergistic integration for breast cancer survival prediction
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-12 · DOI: 10.1016/j.bspc.2025.108640
Haoyu Xue, Hongzhen Xu, Kafeng Wang

Breast cancer, the malignant tumour with the highest incidence in women, poses severe challenges for survival prediction due to its molecular heterogeneity. Current multi-modal deep learning prediction methods suffer from sample category imbalance, insufficient cross-modal characterization, and inflexible static fusion strategies. To address these issues, we propose the adaptive multi-modal temporal fusion network (AMTFN). First, an adaptive weighted sample generation mechanism dynamically adjusts the synthesis strategy to alleviate category imbalance, significantly improving prediction accuracy. Second, a CNN-BiLSTM-BiGRU feature extraction network is constructed to extract gene expression data, CNA, and clinical features, respectively, enhancing cross-modal collaborative characterization. Third, a hierarchical dynamic modal fusion method enhances the embedding representations using gating units and achieves residual fusion through Transformer encoding with dynamic weight calibration. Finally, in the classification stage, a dynamic synergistic integration mechanism enhances generalization through multi-classifier interaction optimization. Experiments show that AMTFN outperforms comparison methods on the METABRIC dataset across several metrics, with an AUC of 97.26%; validation on the TCGA-BRCA dataset further demonstrates its robustness and generalization ability. The source code can be downloaded from GitHub: https://github.com/Xue-U/AMTFN.

Citations: 0
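The AUC reported above can be computed without tracing an ROC curve, via the rank (Mann-Whitney) formulation: the probability that a random positive case scores above a random negative one. A small generic-metric sketch (not the authors' evaluation code):

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise comparisons; labels are 0/1.

    Ties between a positive and a negative score count as half a win,
    matching the Mann-Whitney U statistic.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 97.26% thus means that about 97 of every 100 random positive/negative pairs are ranked correctly by the model.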
A multi-feature fusion model with temporal convolution and vision transformer for epileptic seizure prediction
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-12 · DOI: 10.1016/j.bspc.2025.108628
Zepeng Li, Shenyuan Heng, Molei Zhang, Cuiping Xu, Jianbo Lu, Wenjing Xie, Zhengxin Yang, Fei Chai, Bin Hu

Epilepsy is a disease of the brain's nervous system characterized by sudden onset, recurrence, and intractability. Predicting epileptic seizures from electroencephalogram (EEG) signals and intervening early can greatly improve patients' quality of life. However, recent deep learning-based seizure prediction methods commonly extract only the temporal features of EEG signals, disregarding the global features across all channels; moreover, existing methods usually ignore appropriate strategies for fusing different features. To overcome these issues, we propose a multi-feature fusion model with Temporal Convolution and Vision Transformer (TConv-ViT) for epileptic seizure prediction. Specifically, we first use Wavelet Convolution (WaveConv) and the Short-Time Fourier Transform (STFT) to extract different EEG features. We then compute per-channel attention and feed the weighted features into a temporal CNN and a vision transformer separately to further extract local and global features. We also develop a feature coupling unit that guides the two branches' features to flow into each other, yielding better feature representations. On the CHB-MIT dataset, our method achieves a sensitivity of 94.2% and a specificity of 99.7%, with a false prediction rate below 0.007. We also validate the method on the Xuanwu Hospital intracranial EEG dataset, obtaining an average sensitivity of 93% across three different experimental setups. Experimental results show that, compared with existing methods, the proposed method offers high predictive performance and a low false positive rate, providing a feasible scheme for the clinical application of EEG-based seizure prediction.

Citations: 0
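The STFT branch slices the signal into windows and takes a discrete Fourier transform of each. A naive dependency-free sketch (rectangular window, no overlap-add niceties; real systems use FFT libraries):

```python
import cmath

def stft_magnitude(signal, win, hop):
    """Magnitude spectrogram: one list of |DFT| bins (k = 0..win//2) per frame."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spec = [abs(sum(seg[n] * cmath.exp(-2j * cmath.pi * k * n / win)
                        for n in range(win)))
                for k in range(win // 2 + 1)]
        frames.append(spec)
    return frames
```

Stacking the frames gives the time-frequency image that a vision transformer can then consume as input.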
GMDM-MoE: A biologically-inspired growth-to-morphology and dual-magnification mixture-of-experts for bacterial detection
IF 4.9 · CAS Q2 (Medicine)
Biomedical Signal Processing and Control · Pub Date: 2025-09-12 · DOI: 10.1016/j.bspc.2025.108639
Lesong Zheng, Yunbo Guo, Ying Liang, Lirong Wang, Siyu Meng, Yiwen Xu, Lei Liu, Yizhi Song, Yuguo Tang

Microscopic analysis of bacteria is crucial, so accurate and timely bacterial detection is essential, yet manual analysis is labor-intensive. Automated bacterial detection methods improve efficiency, but they deviate from expert practice and neglect two essential aspects: the temporal growth dynamics of bacteria and the complementary multi-scale features visible under different magnifications. As a result, they struggle with clinical issues such as scale variability, morphological overlap with impurities, and dense clustering. We propose GMDM-MoE, a biologically-inspired growth-to-morphology and dual-magnification mixture-of-experts, which emulates two strategies used by microbiologists. The GM-pipeline simulates recalling temporal growth history during single-frame observation: multi-frame pre-training with explicit temporal encoding captures growth dynamics that are transferred to a morphological characterization phase for single-frame inference. The DM-MoE simulates switching between magnifications: the global context at low magnification and the detailed features at high magnification are learned simultaneously through the independent structures of two experts, namely the feature gate router and a specifically designed detection head. Experiments on real bacterial datasets show that GMDM-MoE achieves state-of-the-art performance under challenging conditions, demonstrating that biologically inspired designs substantially enhance both accuracy and deployability in bacterial detection.

Citations: 0
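The mixture-of-experts pattern combines expert outputs through softmax gate weights. A minimal dense-gating sketch (illustrative; the paper routes features through a gate router rather than mixing final outputs this simply):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_output(gate_logits, expert_outputs):
    """Weighted mixture of per-expert output vectors (one logit per expert)."""
    w = softmax(gate_logits)
    dim = len(expert_outputs[0])
    return [sum(w[i] * expert_outputs[i][d] for i in range(len(w)))
            for d in range(dim)]
```

With equal gate logits the experts are averaged; as one logit dominates, the output approaches that single expert, mimicking a microbiologist committing to one magnification.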