Latest Articles from IEEE Journal of Biomedical and Health Informatics

Swallow-PPG: Photoplethysmography Templates for Comprehensive Temporal Analysis of Swallowing Anatomical Actions
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-18 DOI: 10.1109/JBHI.2025.3590667
Ying Zhang, Junjie Li, Ping Wang, Huaiyu Zhu, Bo Wang, Wei Luo, Yun Pan
In clinical practice, Videofluoroscopic Swallowing Study (VFSS) is commonly used to monitor the activity of anatomical structures during swallowing. However, it is limited by ionizing radiation exposure, adverse effects of barium contrast agents, and the high cost of specialized equipment. In this study, we propose a framework for analyzing swallowing behaviors in photoplethysmography (PPG) waveforms, which includes generalizing the manifestation of swallowing in PPG (i.e., swallowing template generation) and conducting comprehensive temporal analysis of swallowing anatomical actions (TASAA). For swallowing template generation, we cluster and average the samples to obtain template waveforms, followed by shape-based mapping and averaging of 28 time indicators to derive template unified time indicators (TUTIs). For comprehensive TASAA, we leverage template waveforms and TUTIs to estimate time indicators based on the mapping relationship between samples and their respective templates. We evaluate the proposed framework on 357 swallowing PPG samples from 41 elderly subjects. The average relative error across all time indicators is 0.123, and 6 indicators notably excel with errors below 0.1. The proposed template-based swallowing analysis framework is expected to become a low-cost, non-ionizing alternative to VFSS for comprehensive TASAA.

Citations: 0
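The template idea in this abstract — cluster swallowing PPG samples, average each cluster into a template waveform, and score estimates by relative error — can be sketched in a toy form. This is an illustration only, not the paper's method: the clustering scheme, initialisation, and all function names below are assumptions.

```python
import numpy as np

def make_templates(samples, k=2, iters=20):
    """Toy template builder: farthest-point-initialised k-means over
    equal-length 1-D waveforms; each cluster mean serves as a template."""
    X = np.asarray(samples, dtype=float)
    # Deterministic, spread-out initialisation: first sample, then farthest points.
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.stack(centers)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def relative_error(estimated, reference):
    """Relative error of a time indicator, as reported in the abstract."""
    return abs(estimated - reference) / abs(reference)
```

On two synthetic waveform groups (sine-like vs. cosine-like), the two recovered templates separate the groups cleanly, and `relative_error(0.9, 1.0)` gives 0.1.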
An Automatic 3D PET Tumor Segmentation Framework Assisted by Geodesic Sequences
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-18 DOI: 10.1109/JBHI.2025.3590392
Lin Yang, Dan Shao, Chuanli Cheng, Chao Zou, Zhenxing Huang, Hairong Zheng, Dong Liang, Zhi-Feng Pang, Xue-Cheng Tai, Zhanli Hu
Positron Emission Tomography (PET) images reflect the metabolic rate of tracers in different tissues of the human body, which is crucial for early cancer diagnosis and treatment. Accurate tumor segmentation is essential to aid clinicians in determining drug dosages. Due to the low resolution of PET images, prior information (such as CT, MRI, or distance information) is often incorporated to assist PET segmentation. In this paper, we propose an automatic 3D PET tumor segmentation framework assisted by geodesic sequences. Specifically, considering the intrinsic characteristics of PET images, we first construct a geodesic prior, which effectively enhances the contrast between the tumor and background while suppressing noise and the influence of other tissues. To address the need for seed points in the geodesic prior, an automatic marking strategy is designed that identifies all suspected lesion regions and uses their central points as a series of seeds to generate the corresponding geodesic sequences. Subsequently, we develop a three-branch network architecture to simultaneously process PET images, geodesic sequences, and background geodesic information. To enhance image features, a distance attention mechanism is introduced at the end of the network encoder to effectively measure the similarity between different geodesic features, refining the image features. Finally, the network incorporates spatial regularization and local PET intensity information into the activation function via the Soft Threshold Dynamics with Local Intensity Fitting (STDLIF) module, further improving segmentation accuracy. Experimental results demonstrate that, compared to existing state-of-the-art algorithms, the proposed method shows better segmentation performance on both clinical and public datasets.

Citations: 0
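A geodesic prior of the kind described — distances from a seed point that grow faster across intensity changes — can be illustrated with a small Dijkstra-based sketch on a 2-D grid. The paper works in 3-D and its exact cost function is not given here; the step cost below (1 plus a scaled intensity jump) is an assumption for illustration.

```python
import heapq
import numpy as np

def geodesic_map(intensity, seed, beta=1.0):
    """Geodesic distance from `seed` over a 2-D grid via Dijkstra.
    Each 4-neighbour step costs 1 + beta * |intensity difference|,
    so paths crossing strong edges accumulate extra distance."""
    H, W = intensity.shape
    dist = np.full((H, W), np.inf)
    dist[seed] = 0.0
    pq = [(0.0, seed)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + 1.0 + beta * abs(intensity[nr, nc] - intensity[r, c])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist
```

On a uniform image the result reduces to Manhattan distance; an intensity jump along the path adds `beta` times the jump to the cost, which is what lets such a prior separate tumor from background.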
Group Information Guided Smooth Independent Component Analysis Method for Multi-Subject fMRI Data Analysis
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-18 DOI: 10.1109/JBHI.2025.3590641
Yuhui Du, Chen Huang, Vince D Calhoun
Group independent component analysis (ICA) has been extensively used to extract brain functional networks (FNs) and associated neuroimaging measures from multi-subject functional magnetic resonance imaging (fMRI) data. However, the inherent noise in fMRI data can adversely affect the performance of ICA, often leading to noisy FNs and hindering the identification of network-level biomarkers. To address this challenge, we propose a novel method called group information-guided smooth independent component analysis (GIG-sICA). Our method effectively generates smoother functional networks with reduced noise and enhanced functional coherence, while preserving intra-subject independence and inter-subject correspondence of FNs. Importantly, GIG-sICA is capable of handling different types of noise either separately or in combination. To validate the efficacy of our approach, we conducted comprehensive experiments comparing GIG-sICA with traditional group ICA methods on both simulated and real fMRI datasets. Experiments on five simulated datasets, generated by adding various types of noise, demonstrate that GIG-sICA produces smoother functional networks with enhanced spatial accuracy. Additionally, experiments on real fMRI data from 137 schizophrenia patients and 144 healthy controls demonstrate that GIG-sICA more effectively captures functionally meaningful brain networks and reveals clearer group differences. Overall, GIG-sICA produces smooth and precise network estimations, supporting the discovery of robust biomarkers at the network level for neuroscience research.

Citations: 0
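The "smoothness" goal here can be made concrete with a toy roughness measure: the sum of squared first differences over a spatial map, which drops when the map is smoothed. This only illustrates the objective a smoothness penalty targets, not the GIG-sICA algorithm itself; both function names are hypothetical.

```python
import numpy as np

def roughness(x):
    """Sum of squared first differences: a simple spatial smoothness penalty.
    Noisy component maps score high; smooth ones score low."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.diff(x) ** 2))

def smooth(x, w=3):
    """Moving-average smoothing with edge padding (output length = input length)."""
    x = np.asarray(x, dtype=float)
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(w) / w, mode="valid")
```

Applied to a noisy sinusoidal "component map", smoothing strictly reduces the roughness score while leaving a constant map unchanged — the kind of trade-off a smooth-ICA penalty balances against independence.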
Pretraining-based Relevance-aware Visit Similarity Network for Drug Recommendation
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-18 DOI: 10.1109/JBHI.2025.3590391
Yichen He, Shoubin Dong, Yuchen Lin, Xiaorou Zheng, Jinlong Hu
Drug recommendation based on electronic health records (EHR) relies heavily on precise patient modeling, which is more complex than conventional recommendation tasks as it requires both temporal modeling of disease progression and referencing similar patients' medication information. However, sparse visit records and vague patient similarity in EHR data pose significant challenges, often introducing noise and ambiguity. To address these challenges, we propose RaVSNet (Relevance-aware Visit Similarity Network), which improves drug recommendation by leveraging both longitudinal and transversal visit similarity and integrating medical relevance knowledge. RaVSNet uses multi-dimensional visit information similar to the patient's current visit as a reference, and employs a relevance-aware network to explicitly model the matching relationships between medical conditions and medications. Additionally, RaVSNet designs a general pretraining framework specifically for drug recommendation, comprising two tasks, Medication Sequence Reconstruction (MSR) and Causal Effect Inference (CEI), to discover the deep connections between medical information and medications. Experimental results on two public EHR datasets, MIMIC-III and MIMIC-IV, demonstrate that the proposed algorithm outperforms state-of-the-art methods, yielding more accurate drug recommendation combinations, and that the proposed pretraining framework can be seamlessly integrated into most drug recommendation methods to achieve performance improvements. The implementation is available at: https://github.com/SCUT-CCNL/RaVSNet.

Citations: 0
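The visit-similarity ingredient — retrieving past visits whose condition profile resembles the current one — can be sketched with cosine similarity over multi-hot condition vectors. This is a hypothetical baseline for intuition; RaVSNet's actual similarity is learned and multi-dimensional.

```python
import numpy as np

def topk_similar_visits(current, history, k=2):
    """Rank past visits by cosine similarity between multi-hot
    condition vectors; return the top-k indices and their scores."""
    cur = np.asarray(current, dtype=float)
    H = np.asarray(history, dtype=float)
    sims = H @ cur / (np.linalg.norm(H, axis=1) * np.linalg.norm(cur) + 1e-12)
    order = np.argsort(-sims)[:k]
    return order, sims[order]
```

A visit sharing all conditions with the current one scores 1.0 and is retrieved first; a disjoint visit scores 0 and is never preferred — the medications of retrieved visits would then serve as reference signals.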
ToothAxis: Generalizable Tooth Axis Estimation Network from CBCT or IOS Models
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-17 DOI: 10.1109/JBHI.2025.3590210
Nan Bao, Qingyao Luo, Jiamin Wu, Zhiming Cui, Yue Zhao
Tooth axes, indicating the orientation of teeth, are crucial in orthodontics and dental implants. The precise and automated estimation of tooth axes in 3D dental models is of significant importance. In clinical settings, Cone-beam computed tomography (CBCT) images and intraoral scanning (IOS) models are the two primary forms of digital data, providing 3D volumetric and surface information of the oral cavity, respectively. However, the detection of tooth axes remains a largely manual annotation task due to the complexities associated with geometric definitions and the variations among different tooth types and individuals. In this paper, we propose a novel two-stage network, named ToothAxis, for tooth axis estimation using either CBCT or IOS models. Given that IOS models only capture the tooth crown surface and lack information about the tooth roots, we initially employ an implicit-function tooth completion module for 3D tooth completion in the first stage. Subsequently, with the 3D tooth models segmented from CBCT images or completed from IOS models, a point-wise offset-based module is proposed in the second stage to accurately estimate the tooth axes. This design encodes tooth orientation into a dense representation, which is better suited for sparse information regression tasks such as tooth axis estimation. Additionally, we incorporate a class-specific feature attention module to integrate global context representation, thereby enhancing robustness in handling diverse tooth shapes. We evaluated ToothAxis on a dataset obtained from real-world dental clinics, comprising 529 tooth models with corresponding CBCT images and paired IOS models. ToothAxis achieves angle errors of LA (2.921°), PSA (4.801°), and LSA (5.074°) on tooth models extracted from CBCT images, and LA (5.326°), PSA (6.360°), and LSA (6.520°) on partial crowns extracted from IOS models. Extensive evaluations, ablation studies, and comparative analyses demonstrate that our method achieves accurate tooth axis estimations and surpasses state-of-the-art approaches.

Citations: 0
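A classical baseline that helps interpret the angle errors quoted above is the principal PCA direction of a tooth point cloud, together with a direction-agnostic angle error. This is a sketch for intuition, not ToothAxis's point-wise offset network.

```python
import numpy as np

def principal_axis(points):
    """Dominant direction of a 3-D point cloud: the first right-singular
    vector of the mean-centred coordinates (classical PCA axis)."""
    P = np.asarray(points, dtype=float)
    P = P - P.mean(axis=0)
    _, _, vt = np.linalg.svd(P, full_matrices=False)
    return vt[0]

def angle_deg(u, v):
    """Unsigned angle between two axes in degrees; the |dot| makes it
    direction-agnostic, since an axis has no preferred sign."""
    c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```

For an elongated point set the PCA axis recovers the long axis exactly; errors like the 2.9°–6.5° figures in the abstract would be computed with exactly this kind of angle metric against annotated axes.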
M4CEA: A Knowledge-guided Foundation Model for Childhood Epilepsy Analysis
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-17 DOI: 10.1109/JBHI.2025.3590463
Yuanmeng Feng, Dinghan Hu, Tiejia Jiang, Feng Gao, Jiuwen Cao
Existing electroencephalogram (EEG)-based deep learning models are mainly designed for single or several specific tasks in childhood epilepsy analysis, which limits the perceptual capabilities and generalisability of the models. Recently, Foundation Models (FMs) have achieved significant success in medical analysis, motivating us to explore the capability of FMs in childhood epilepsy analysis. The objective is to construct a FM with strong generalization capability for multi-task childhood epilepsy analysis. To this end, we propose a knowledge-guided foundation model for childhood epilepsy analysis (M4CEA). The main contributions of M4CEA are the knowledge-guided mask strategy and the temporal embedding of the temporal encoder, which allow the model to effectively capture multi-domain representations of childhood EEG signals. Through pre-training on an EEG dataset with more than 1,000 hours of childhood EEG recordings, followed by fine-tuning, the developed M4CEA model achieves promising performance on 8 downstream tasks in childhood epilepsy analysis, including artifact detection, onset detection, seizure type classification, childhood epilepsy syndrome classification, hypoxic-ischaemic encephalopathy (HIE) grading, sleep stage classification, epileptiform activity detection, and spike-wave index (SWI) quantification. Taking the HUH (Helsinki University Hospital) seizure detection task as an example, our model shows a 9.42% improvement over LaBraM (a state-of-the-art large brain foundation model for EEG analysis) in Balanced Accuracy. The source code and pre-trained weights are available at: https://github.com/Evigouse/M4CEA Project.

Citations: 0
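Masked pretraining of the kind M4CEA builds on starts from a segment-masking step over multi-channel EEG: hide contiguous time spans, then train the model to reconstruct them. A minimal sketch assuming random contiguous segments per channel — the paper's knowledge-guided strategy chooses segments differently, and all names here are hypothetical.

```python
import numpy as np

def mask_segments(x, mask_ratio=0.3, seg_len=4, seed=0):
    """Zero out random contiguous time segments of a (channels, time)
    array; return the masked copy and the boolean mask (True = hidden).
    A reconstruction loss would then be computed only on masked positions."""
    rng = np.random.default_rng(seed)
    C, T = x.shape
    n_seg = int(mask_ratio * T / seg_len)  # segments per channel (may overlap)
    mask = np.zeros((C, T), dtype=bool)
    for c in range(C):
        for _ in range(n_seg):
            s = int(rng.integers(0, T - seg_len + 1))
            mask[c, s:s + seg_len] = True
    xm = x.copy()
    xm[mask] = 0.0
    return xm, mask
```

The returned mask is exactly what the pretraining objective needs: positions where the encoder must infer the signal from surrounding context.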
VGRF Signal-Based Gait Analysis for Parkinson's Disease Detection: A Multi-Scale Directed Graph Neural Network Approach
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-16 DOI: 10.1109/JBHI.2025.3589772
Xiaotian Wang, Xuanhang Xu, Zhifu Zhao, Fu Li, Fei Qi, Shuo Liang
Parkinson's Disease (PD) is often characterized by abnormal gait patterns, which can be objectively and quantitatively diagnosed using Vertical Ground Reaction Force (VGRF) signals. Previous studies have demonstrated the effectiveness of deep learning in VGRF signal analysis. However, the inherent graph structure of VGRF signals has not been adequately considered, limiting the representation of dynamic gait characteristics. To address this, we propose a Multi-Scale Adaptive Directed Graph Neural Network (MS-ADGNN) approach to distinguish the gaits of Parkinson's patients from those of healthy controls. This method models the VGRF signal as a multi-scale directed graph, capturing the distribution relationships within the plantar sensors and the dynamic pressure conduction during walking. MS-ADGNN integrates an Adaptive Directed Graph Network (ADGN) unit and a Multi-Scale Temporal Convolutional Network (MSTCN) unit. ADGN extracts spatial features from three scales of the directed graph, effectively capturing local and global connectivity. MSTCN extracts multi-scale temporal features, capturing short- to long-term dependencies. The proposed method outperforms existing methods on three widely used datasets. In cross-dataset experiments, the average improvements in accuracy, F1-score, and geometric mean are 2.46%, 1.25%, and 1.11%, respectively. In 10-fold cross-validation experiments, the improvements are 0.78%, 0.83%, and 0.81%, respectively.

Citations: 0
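One propagation step on a directed sensor graph — the basic operation inside units like ADGN — can be sketched as row-normalised adjacency (with self-loops) times features times weights, followed by a ReLU. This is the generic directed graph convolution, not the paper's adaptive multi-scale version.

```python
import numpy as np

def directed_gcn_layer(A, X, W):
    """One directed graph-convolution step.
    A: (N, N) directed adjacency (A[i, j] = 1 for edge i -> j),
    X: (N, F) node features, W: (F, F') weights.
    Self-loops keep each node's own signal; row normalisation
    averages over a node's out-neighbourhood."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(((D_inv * A_hat) @ X) @ W, 0.0)  # ReLU
```

On a 3-node chain with identity features and weights, node 0 averages itself and its successor while the sink node keeps only its own feature — the directedness is what lets such a layer model one-way pressure conduction across plantar sensors.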
Multi-View Fused Nonnegative Matrix Completion Methods for Drug-Target Interaction Prediction
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-16 DOI: 10.1109/JBHI.2025.3589662
Ting Li, Chuanqi Lao, Zhao Li, Hongyang Chen
Accurate prediction of drug-target interactions (DTIs) is crucial for accelerating drug discovery and reducing experimental costs. However, challenges such as sparse interactions and heterogeneous datasets complicate this prediction. In this study, we hypothesize that leveraging nonnegative matrix completion and integrating heterogeneous similarity information from multiple biological views can improve the accuracy, interpretability, and scalability of DTI prediction. To validate this, we propose two multi-view fused nonnegative matrix completion methods that combine three key components: (1) a nonnegative matrix completion framework that avoids heuristic rank selection and ensures biologically interpretable predictions; (2) a linear multi-view fusion mechanism, where weights over multiple drug and target similarity matrices are jointly learned through linearly constrained quadratic programming; and (3) multi-graph Laplacian regularization to preserve structural properties within each view. The optimization is performed using two efficient proximal linearization-incorporated block coordinate descent algorithms. Extensive experiments on four gold-standard datasets and a larger real-world dataset demonstrate that our models consistently outperform state-of-the-art single-view, multi-view, and deep learning-based DTI prediction methods. Furthermore, ablation studies confirm the contribution of each model component, and scalability analysis highlights the computational efficiency of our approach.

Citations: 0
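The core of nonnegative matrix completion — fitting nonnegative factors only on observed entries and reading predictions off the missing ones — can be sketched with masked multiplicative updates. This omits the paper's multi-view fusion and Laplacian regularization; the fixed rank and update scheme below are assumptions.

```python
import numpy as np

def nmf_complete(M, mask, r=2, iters=1000, seed=0):
    """Complete a nonnegative matrix from observed entries (mask == 1)
    using weighted-NMF multiplicative updates; returns the dense W @ H."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, r)) + 0.1  # strictly positive init
    H = rng.random((r, n)) + 0.1
    Mo = M * mask  # zero out unobserved entries
    for _ in range(iters):
        WH = (W @ H) * mask
        W *= (Mo @ H.T) / (WH @ H.T + 1e-12)
        WH = (W @ H) * mask
        H *= (W.T @ Mo) / (W.T @ WH + 1e-12)
    return W @ H
```

On a rank-1 interaction matrix with one entry hidden, the rank-1 completion is unique and the updates recover the hidden value closely; in the DTI setting the recovered entries are the predicted interaction scores.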
Remote PPG Measurement Using a Synergistic Time-Frequency Network
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-16 DOI: 10.1109/JBHI.2025.3589712
Yiming Li, Qinglin He, Yihan Yang, Yuguang Chu, Yuanhui Hu, Zhe Wu, Xiaokai Bai, Xiaohan Zhang, Weichen Liu, Hui-Liang Shen
Remote photoplethysmography (rPPG) aims to estimate the blood volume pulse (BVP) signal from facial videos. Existing rPPG approaches still suffer from limitations. We attribute this to two primary problems: (1) the reliance solely on time-domain processing, which makes the signal susceptible to interference, and (2) the presence of a phase discrepancy between the supervision signal and the ground-truth PPG. To address these problems, we propose TFSNet, a novel time-frequency synergy network for rPPG signal estimation and heart rate prediction. Specifically, we leverage a time-frequency fusion (TFF) module, which integrates frequency-domain information into the learning process to enrich the feature representations. Additionally, we introduce an amplitude-phase decoupling (APD) module, which applies phase compensation in the frequency domain to mitigate the adverse effects of incorrect phase supervision. Extensive experiments demonstrate that TFSNet achieves state-of-the-art performance, significantly outperforming current approaches in both accuracy and robustness.

Citations: 0
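The frequency-domain side of rPPG is easy to make concrete: once a BVP-like signal is estimated, heart rate is typically read off as the dominant FFT peak in the physiologically plausible band. A minimal sketch — the band limits below are a common convention, not taken from the paper.

```python
import numpy as np

def heart_rate_bpm(signal, fs):
    """Heart rate as the dominant spectral peak in the 0.7-3 Hz band
    (42-180 bpm). Mean removal suppresses the DC component."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    mag = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(mag[band])]
```

For a 10-second, 30 fps recording of a 1.2 Hz pulse, the peak falls exactly on a frequency bin and the estimate is 72 bpm.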
FE-SpikeFormer: A Camera-Based Facial Expression Recognition Method for Hospital Health Monitoring
IF 6.7 · Medicine, Tier 2
IEEE Journal of Biomedical and Health Informatics Pub Date : 2025-07-15 DOI: 10.1109/JBHI.2025.3589267
Zhekang Dong, Liyan Zhu, Shiqi Zhou, Xiaoyue Ji, Chun Sing Lai, Minjiang Chen, Jiansong Ji
Facial expression recognition has emerged as a critical research area in health monitoring, enabling healthcare professionals to assess patients' emotional and psychological states for timely intervention and personalized care. However, existing methods often struggle to balance computational accuracy with energy efficiency. To address this challenge, this paper proposes FE-SpikeFormer, a high-accuracy, low-energy, and deployment-friendly Spiking Neural Network (SNN) for facial emotion recognition. The proposed architecture comprises three key components: the initial convolution module, the spiking extraction block, and the spiking integration block. These modules collectively support detailed and contextual feature extraction, promote spatial feature integration, and strengthen the representational capacity of spiking signals. The method is jointly verified in both controlled laboratory settings and real-world hospital scenarios. Experimental results demonstrate that FE-SpikeFormer achieves top-three recognition accuracy among state-of-the-art methods while using only 6.93 million parameters. Moreover, it exhibits strong robustness against various noise conditions, underscoring its potential for practical deployment in healthcare environments.

Citations: 0
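The building block of any SNN like FE-SpikeFormer is a spiking neuron, which is where the energy efficiency comes from: computation happens only on sparse binary spikes. A minimal leaky integrate-and-fire sketch (discrete time, hard reset; the paper's exact neuron model and parameters are not specified here).

```python
def lif_spikes(inputs, tau=0.5, v_th=1.0):
    """Leaky integrate-and-fire neuron: at each step the membrane
    potential decays by `tau`, integrates the input, and emits a
    binary spike with a hard reset when it crosses the threshold."""
    v, out = 0.0, []
    for i in inputs:
        v = tau * v + i
        s = 1 if v >= v_th else 0
        out.append(s)
        if s:
            v = 0.0  # hard reset after firing
    return out
```

A sub-threshold input of 0.6 per step needs three steps of integration before each spike, so the neuron fires on steps 3 and 6 of a constant drive — sparse output from dense input, which is the efficiency argument for SNNs.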