TrajGPT: Irregular Time-Series Representation Learning of Health Trajectory
Ziyang Song, Qincheng Lu, He Zhu, David Buckeridge, Yue Li
IEEE Journal of Biomedical and Health Informatics, 2025-10-13. DOI: 10.1109/JBHI.2025.3620205

Abstract: In the healthcare domain, time-series data are often irregularly sampled, with intervals that vary across outpatient visits, posing challenges for existing models designed for equally spaced sequential data. To address this, we propose the Trajectory Generative Pre-trained Transformer (TrajGPT) for representation learning on irregularly sampled healthcare time series. TrajGPT introduces a novel Selective Recurrent Attention (SRA) module that leverages a data-dependent decay to adaptively filter irrelevant past information. As a discretized ordinary differential equation (ODE) framework, TrajGPT captures the underlying continuous dynamics and enables time-specific inference, forecasting arbitrary target timesteps without auto-regressive prediction. Experimental results on the longitudinal EHR dataset PopHR from the Montreal health system and on eICU from PhysioNet showcase TrajGPT's superior zero-shot performance in disease forecasting, drug usage prediction, and sepsis detection. The inferred trajectories of diabetic and cardiac patients reveal meaningful comorbid conditions, underscoring TrajGPT as a useful tool for forecasting patient health evolution.
CPGNet: Multimodal Graph Learning with Hierarchical Category Guidance for Multi-Label Whole Slide Image Classification
Haoyun Zhao, Dapeng Tao, Yibing Zhan, Jun Ni, Yang Chen
IEEE Journal of Biomedical and Health Informatics, 2025-10-13. DOI: 10.1109/JBHI.2025.3620443

Abstract: The analysis of whole slide image (WSI) categories in digital pathology is critical for clinician decision making regarding the diagnosis, treatment, and prognosis of cancer patients. However, current automated methods for cancer type identification are predominantly formulated as single-label classification problems. These methods typically rely on datasets with relatively balanced and abundant samples, where each WSI belongs to a single category. This does not fully align with real-world clinical scenarios, where cancer subtypes often exhibit multi-label characteristics and class imbalance, posing significant challenges. To address this issue, this paper proposes CPGNet, a category-prompted graph network designed as a multi-label WSI classifier better suited for clinical applications. CPGNet employs the MaskSLIC algorithm for superpixel segmentation of WSIs, effectively capturing the nonlinear spatial distribution of cellular and tissue structures. The segmented superpixels are then encoded as graph nodes with their corresponding features, while edges and edge features are constructed to abstractly model WSIs as graphs. Furthermore, the method introduces a GLGFI module, which aggregates features from neighboring nodes and edges via a GNN to capture local information, while simultaneously leveraging a multi-head self-attention mechanism to model global dependencies, mimicking the diagnostic behavior of pathologists. Additionally, a VCI module exploits semantic relationships between categories to guide visual feature classification, providing supplementary cues for accurate predictions. To enhance the model's focus on hard-to-classify positive samples, we also implement a reweighting strategy. The proposed approach is evaluated on a private dataset (YNLUAD) and two public challenge datasets (BCNB and AGGC22). The experimental results demonstrate the superiority, universality, and robustness of CPGNet. The code is available at https://github.com/zhy1312/CPGNet.
CFTResNet: A novel cross-domain diagnosis framework guided by interpretability for cardiovascular diseases
Pengfei Liang, Yanwei Du, Zijian Qiao, Suiyan Wang
IEEE Journal of Biomedical and Health Informatics, 2025-10-13. DOI: 10.1109/JBHI.2025.3620820

Abstract: Cardiovascular diseases (CVDs) are the leading cause of mortality worldwide. While deep learning (DL) has shown potential in automated CVD diagnosis, domain shifts due to variations in acquisition devices and environments hinder generalization and reliability. This paper proposes an interpretable cross-domain diagnostic framework, named CFTResNet, to mitigate domain shifts and enhance diagnostic interpretability. In contrast to traditional transfer learning methods that typically fine-tune the fully connected (FC) layers, the proposed CFTResNet uses a strategy called Module Robustness Criticality (MRC) to evaluate which parts of the pre-trained model are weak in robustness and then fine-tunes only those specific weak modules instead of adjusting the entire model, thus enhancing adaptability and interpretability. Additionally, to enhance feature representation, we integrate a Temporal-Channel Fusion Module (TCFM) with the ResNet architecture, which effectively captures characteristic information of different channels from heart sound (HS) signals, enhancing the model's capability to discern subtle pathological patterns in cardiac auscultation. Experiments on two public HS datasets demonstrate that CFTResNet outperforms conventional methods in diagnostic accuracy, interpretability, and cross-domain generalization, highlighting its potential as a reliable AI-assisted tool for clinical CVD diagnosis.
{"title":"HighMPNN: A Graph Neural Network Approach for Structure-Constrained Cyclic Peptide Sequence Design.","authors":"Wen Xu, Chengyun Zhang, Tianfeng Shang, Qingyi Mao, Jingjing Guo, Hongliang Duan","doi":"10.1109/JBHI.2025.3620163","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3620163","url":null,"abstract":"<p><p>Cyclic peptides become attractive therapeutic candidates due to their diverse biological activities. However, existing deep learning-based sequence design models, such as ProteinMPNN, are primarily intended for linear peptides or proteins and do not explicitly account for the unique topological constraints of cyclic peptides. In this study, we introduce HighMPNN, a graph neural network model specifically developed for cyclic peptide sequence design. Through the integration of explicit structural constraints into the GNN-based framework, HighMPNN captures the geometric features of cyclic backbones while learning sequence patterns. The combination of cross-entropy loss with Frame Aligned Point Error (FAPE) loss allows the model to simultaneously optimize sequence generation and enhance structural accuracy. HighMPNN demonstrates superior performance in both sequence recovery rate and structural consistency compared to baseline models, achieving an average sequence recovery rate of 63.95% and an average Cα root-mean-square deviation (RMSD_Cα) of 1.413 Å. These results highlight the model's ability to generate sequences that closely resemble native backbones. At present, HighMPNN is limited to natural amino acids. Future work will focus on extending the framework to support non-canonical residues and structurally diverse cyclic peptide scaffolds, thereby accelerating cyclic peptide discovery and advancing peptide-based drug development.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145285966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PathBot: A Foundation Model for Pathological Image Analysis.","authors":"Mengkang Lu, Tianyi Wang, Qingjie Zeng, Zilin Lu, Zhe Li, Yong Xia","doi":"10.1109/JBHI.2025.3619967","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3619967","url":null,"abstract":"<p><p>Computational pathology has emerged as a transformative paradigm by leveraging artificial intelligence to automate and enhance diagnostic procedures. However, existing models often target narrow tasks or specific tumor types, missing opportunities to unify diverse datasets and tasks through joint learning. In this work, we introduce PathBot, a foundation model tailored for comprehensive pathological image analysis. Central to PathBot is a ViTGiant encoder with one billion parameters, the largest model to date trained on publicly available pathological data. We pre-train this encoder using a novel Masked Distillation Network (MDN) and an integrated learning strategy that combines contrastive and generative objectives. The pretraining leverages over 30 million image patches derived from 11,765 whole slide images (WSIs) across 32 cancer types in the Cancer Genome Atlas (TCGA). To evaluate its versatility, we pair the encoder with task-specific decoders for segmentation, detection, classification, and regression. Extensive experiments across 20 downstream tasks demonstrate that PathBot achieves state-of-the-art performance in most cases, showcasing its robustness and generalizability. Code and models will be released to support further research.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145274493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Customized SAM-Med3D with Multi-view Representation Fusion and Age-Grade Stratified Loss for Glioma Survival Risk Prediction
Xinyu Li, Hulin Kuang, Jin Liu, Lanlan Wang, Pengcheng Shu, Mengshen He, Jianxin Wang
IEEE Journal of Biomedical and Health Informatics, 2025-10-10. DOI: 10.1109/JBHI.2025.3619935

Abstract: Survival risk prediction is crucial for personalized treatment of gliomas. Medical image foundation models can explore complex medical features, which are critical for prognosis in gliomas. We propose SAM-Risk, which uses a customized SAM-Med3D with multi-view representation fusion and a clinical knowledge-based age-grade stratified loss for glioma survival risk prediction. First, to utilize potential interactions between multiple views at an early stage, we design a 3D representation generation module that transforms 1D handcrafted radiomics and clinical features into 3D representations, which are fused with multimodal MRIs through a multi-view representation fusion module. The fused representation is fed into the customized SAM-Med3D, fine-tuned using LoRA and a disparity function, to extract survival risk-related features. We design a feature refinement module to explore the inter-channel relationships among the outputs of the fine-tuned SAM-Med3D. Additionally, we propose an age-grade stratified loss based on glioma prognosis standards to make the predicted risk more consistent with clinical prior knowledge. Validated on the two publicly available UCSF-PDGM and BraTS2020 datasets, SAM-Risk achieves a C-index of 75.08% and 73.67%, respectively, outperforming several survival risk prediction methods.
Optimizing Accuracy-Efficiency Trade-Offs of On-Device Activity Inference with Star Operation
Guangjie Chen, Zenan Fu, Yetong Sha, Di Xiong, Lei Zhang, Hao Wu, Aiguo Song
IEEE Journal of Biomedical and Health Informatics, 2025-10-10. DOI: 10.1109/JBHI.2025.3619549

Abstract: Lightweight convolution-based neural networks (CNNs) are well suited for sensor-based human activity recognition (HAR) on resource-constrained edge devices, offering fast inference. However, convolutional kernels are limited to a small window and can only capture local details in time-series sensor data, preventing further performance gains. Although introducing self-attention into convolution helps capture long-range dependencies, its high computational cost can significantly slow down on-device activity inference. In this paper, we introduce a new learning paradigm (the star operation) and present a lightweight Dual-Branch High-Order Interactions (DbHoi) block that is computationally friendly for mobile HAR deployment. The proposed DbHoi block implicitly transforms raw sensor inputs into high-dimensional non-linear features while actually operating in a low-dimensional feature space (analogous to the design principle of polynomial kernel tricks), without incurring extra computational overhead. Extensive experiments on three public HAR benchmarks, UCI-HAR, UniMiB-SHAR, and OPPORTUNITY, demonstrate that DbHoi consistently surpasses various meticulously designed lightweight networks such as MobileNet, ShuffleNet, and GhostNet. Detailed ablation studies, representation visualizations, and on-device latency analyses further validate our insights with regard to the star operation, while underscoring its practical merit in real-world HAR deployment.
{"title":"AMLPF-CLIP: Adaptive Prompting and Distilled Learning for Imbalanced Histopathological Image Classification.","authors":"Xizhang Yao, Guanghui Yue, Jeremiah D Deng, Hanhe Lin, Wei Zhou","doi":"10.1109/JBHI.2025.3619343","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3619343","url":null,"abstract":"<p><p>Histopathological image classification (HIC) plays a pivotal role in computer-aided diagnosis, enabling lesion characterization (e.g., tumor grading) and survival outcome prediction. Despite recent advances in HIC, existing methods still face challenges in integrating domain-specific knowledge, addressing class imbalance, and ensuring computational efficiency. To address these challenges, we propose AMLPF-CLIP, an enhanced CLIP-based framework for HIC featuring three key innovations. First, we introduce an Adaptive Multi-Level Prompt Fusion (AMLPF) strategy that leverages three levels of textual prompts: class labels, basic descriptions, and GPT-4o-generated detailed pathological features for enhanced semantic representation and cross-modal alignment. Second, we design a class-balanced resampling method that dynamically adjusts sampling weights based on both data imbalance and classification performance, targeting underrepresented, low-confidence classes. Third, we develop a Knowledge Distillation (KD) technique that leverages output-level alignment via L2 loss, transferring knowledge from a large Vision Transformer (ViT-L/16) to a lightweight ResNet-50-based CLIP model. Extensive experiments on three public datasets demonstrate that AMLPF-CLIP consistently outperforms eleven state-of-the-art methods, achieving accuracy improvements of 1.19% on Chaoyang, 2.64% on BreaKHis, and 0.90% on LungHist700. AMLFP-CLIP also demonstrates improved robustness and efficiency, highlighting its practical applicability.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145258114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P3DL: A Privacy Preserving Personalized Distributed Learning Framework for EEG-based Cognitive State Identification
Yu Ouyang, Wenjie Cheng, Lizhi Wang, Xiaoya Zhu, Hong Zeng
IEEE Journal of Biomedical and Health Informatics, 2025-10-09. DOI: 10.1109/JBHI.2025.3619419

Abstract: Electroencephalography (EEG)-based brain cognitive state identification for the elderly allows timely detection of and early intervention in cognitive deterioration. Notably, EEG signals carry a great deal of vital personal information. However, most existing cognitive evaluations focus on improving the accuracy of EEG decoding and enhancing the performance of identification models, while neglecting the privacy protection of EEG data. To address this challenge, we propose a privacy-preserving personalized distributed learning framework (P3DL) for cognitive state identification. Specifically, it consists of clients and a central server. Each client contains a cognitive model and a score model for identifying cognitive states and quantifying cognitive levels, respectively. The central server aggregates local model parameters from the distributed clients, then updates the global model's parameters and sends them back to the clients for iterative optimization. A federated dynamic update strategy (FedDBS) is designed to jointly update all global and local models with a supervisory metric. To further improve identification performance and judge the level of misdiagnosis, a novel loss function, Extreme Error Loss (E2Loss), is proposed. Compared with the baseline, experimental results on our self-collected clinical dataset and a public dataset show an average increase in F2Score of 5.58% and 3.31%, and in accuracy of 1.78% and 2.46%, respectively. Furthermore, the scalability of the framework is demonstrated on an emotion recognition task. Our proposed framework P3DL can not only improve identification performance but also protect the privacy of EEG data, opening a new window for secure healthcare.
The Use of Machine Learning and Explainable Artificial Intelligence in Gut Microbiome Research: A Scoping Review
Hania Tourab, Laura Lopez-Perez, Pena Arroyo-Gallego, Eleni Georga, Miguel Rujas, Francesca Romana Ponziani, Macarena Torrego-Ellacuria, Beatriz Merino-Barbancho, Neri Niccolo, Gastone Ciuti, Dimitrios Fotiadis, Gasbarrini Antonio, Maria Fernanda Cabrera, Maria Teresa Arredondo, Giuseppe Fico
IEEE Journal of Biomedical and Health Informatics, 2025-10-09. DOI: 10.1109/JBHI.2025.3593198

Abstract: Gut microbiome research has made tremendous progress, especially with the integration of machine learning and artificial intelligence, which can provide new insights from complex microbiome data and its impact on human health. The use of explainable artificial intelligence is becoming critical in medicine, and adopting it in precision-medicine models leveraging gut microbiome data is appealing for providing more transparency and trustworthiness in clinical research. This scoping review evaluates the use of machine learning and explainable artificial intelligence techniques and identifies existing gaps in knowledge in this research area to suggest future research directions. Online databases (PubMed and Scopus) were searched to retrieve papers published between 2018 and 2024, from which we selected 76 publications. Different clinical applications of machine learning and artificial intelligence techniques in gut microbiome studies were explored in the reviewed articles. We observed a high prevalence of black-box models in the field, with Random Forest being the most used algorithm. Explainability remains somewhat limited in the field, but it appears to be improving; researchers showed particular interest in SHAP as an explainability technique. Finally, not enough attention was paid to the reproducibility of the published research. This review highlights opportunities for advancing research on explainable artificial intelligence models in the microbiome field, supporting future applications of microbiome-based precision medicine.