{"title":"A Blockchain-Enabled AI-Driven Secure Searchable Encryption Framework for Medical IoT Systems.","authors":"Salabat Khan, Mansoor Khan, Muhammad Asghar Khan, Muhammad Attique Khan, Lu Wang, Kaishun Wu","doi":"10.1109/JBHI.2025.3538623","DOIUrl":"10.1109/JBHI.2025.3538623","url":null,"abstract":"<p><p>Blockchain technology is widely adopted in the Internet of Medical Things (IoMT) for information storage and retrieval. The integration of blockchain with IoMT systems enhances security; however, it also raises privacy and security concerns in data searching and storage. This study proposes a novel Binary Spring Search (BSS) technique based on group theory and integrated with a hybrid deep neural network approach to enhance the security and trustworthiness of IoMT. The proposed method incorporates secure key revocation and dynamic policy updates. The proposed framework leverages blockchain technology for immutable and decentralized data management, Artificial Intelligence (AI) for dynamic data analysis and threat detection, and advanced searchable encryption techniques to facilitate secure and efficient data queries. The proposed patient-centered data access model combines blockchain technology with trust chains, making our method safer and more efficient while demonstrating a return on investment. Furthermore, our blockchain-based architecture ensures the integrity and immutability of medical data generated by IoMT devices, allowing for decentralized and tamper-proof storage. We used the Hyperledger Fabric tool, together with OriginLab, for simulations in a blockchain context. Our findings suggest that the proposed framework provides a more searchable and secure solution for healthcare systems than existing methods. 
The simulation results show that our algorithm significantly reduces transaction time while maintaining high levels of security, making it a robust solution for managing Patient Health Records (PHR) in a decentralized manner.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143572933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WP-FSCIL: A Well-Prepared Few-shot Class-incremental Learning Framework for Pill Recognition.","authors":"Jinghua Zhang, Chen Li, Marco Cristani, Hongzan Sun, Marcin Grzegorzek, Huiling Chen","doi":"10.1109/JBHI.2025.3548691","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3548691","url":null,"abstract":"<p><p>Few-shot Class-incremental Pill Recognition (FSCIPR) aims to develop an automatic pill recognition system that requires only a few training samples and can continuously adapt to new classes, providing technical support for applications in hospitals, portable apps, and assistance for visually impaired individuals. This task faces three core challenges: overfitting, fine-grained classification problems, and catastrophic forgetting. We propose the Well-Prepared Few-shot Class-incremental Learning (WP-FSCIL) framework, which addresses overfitting through a parameter-freezing strategy, enhances the robustness and discriminative power of backbone features with Center-Triplet (CT) loss and supervised contrastive loss for fine-grained classification, and alleviates catastrophic forgetting using a multi-dimensional Knowledge Distillation (KD) strategy based on flexible Pseudo-feature Synthesis (PFS). By flexibly synthesizing any number of old-class features, the PFS strategy resolves the issue of insufficient samples in the KD process, enabling Response-based KD (KD1) and Relation-based KD (KD2) to comprehensively preserve old knowledge. The effectiveness of WP-FSCIL has been validated through experiments conducted on two publicly available pill datasets. 
These experiments show that WP-FSCIL outperforms existing state-of-the-art methods, demonstrating its superior performance.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143572939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
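The Pseudo-feature Synthesis (PFS) strategy above replays old-class knowledge by generating any number of old-class feature vectors on demand. A minimal sketch of that idea, assuming Gaussian per-class feature statistics (the function name and the exact sampling scheme are our illustration, not the paper's method):

```python
import numpy as np

def synthesize_pseudo_features(class_mean, class_std, n, seed=None):
    """Draw n pseudo-features for an old class from a Gaussian fitted to
    its stored per-dimension statistics, so any number of 'old' samples
    can be replayed during knowledge distillation."""
    rng = np.random.default_rng(seed)
    return class_mean + class_std * rng.standard_normal((n, class_mean.shape[0]))
```

Because only a mean and standard deviation per class need to be stored, this avoids keeping raw exemplars of earlier sessions.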
{"title":"IEEE Journal of Biomedical and Health Informatics Information for Authors","authors":"","doi":"10.1109/JBHI.2025.3541766","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3541766","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 3","pages":"C3-C3"},"PeriodicalIF":6.7,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10916526","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143564094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fluid Intake Action Detection Based on Egocentric Videos and YOLOv8 Models.","authors":"Xin Chen, Xinqi Bao, Ernest Kamavuako","doi":"10.1109/JBHI.2025.3548512","DOIUrl":"10.1109/JBHI.2025.3548512","url":null,"abstract":"<p><p>Dehydration in older adults poses significant health risks, requiring effective monitoring solutions. This study addresses the challenge of accurately detecting fluid intake using a first-person, vision-based approach with wearable cameras and advanced object detection models. We developed a comprehensive dataset comprising 17 hours of drinking footage (∼3100 events) and 15 hours of non-drinking activities (∼3600 events) recorded as interference, from 36 participants, collected between October 2022 and January 2023 at King's College London. We included various container types and daily activities to enhance the model's robustness and generalizability. YOLOv8 models were used to detect drinking-related objects, and a mechanism was developed to analyse the size and position of the detection output to identify hand-container interactions and movements. The models achieved mAP@50 over 0.97 and an F1-score over 0.95 in detecting drinking-related objects. Action detection testing on video streams demonstrated an F1-score of 0.917, which dropped to 0.863 when interference activities were added. Additionally, the model detected the start of drinking activities with an average latency of 0.24 seconds and the end with an average latency of 0.04 seconds, indicating high temporal accuracy. These results demonstrate the feasibility of egocentric, vision-based fluid-intake detection and its potential application in preventing dehydration. 
To our knowledge, this is the first vision-based dataset focusing on fluid-intake actions from a first-person viewpoint, offering a novel foundation for advancing hydration monitoring in older adults and various real-world contexts.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143572936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
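The hand-container interaction mechanism described above can be illustrated with a simple bounding-box overlap test between detected hands and containers. This is a hedged sketch: the overlap threshold, function names, and per-frame logic are our assumptions, not the authors' exact detection pipeline:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def drinking_frames(hand_boxes, container_boxes, iou_thresh=0.1):
    """Flag frame indices where a detected hand overlaps a detected
    container; None marks frames with no detection of that object."""
    return [i for i, (h, c) in enumerate(zip(hand_boxes, container_boxes))
            if h is not None and c is not None and iou(h, c) > iou_thresh]
```

A temporal smoothing step over the flagged frames would then give drinking-event start and end times.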
{"title":"IM-Diff: Implicit Multi-Contrast Diffusion Model for Arbitrary Scale MRI Super-Resolution.","authors":"Lanqing Liu, Jing Zou, Cheng Xu, Kang Wang, Jun Lyu, Xuemiao Xu, Zhanli Hu, Jing Qin","doi":"10.1109/JBHI.2025.3544265","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3544265","url":null,"abstract":"<p><p>Diffusion models have garnered significant attention for MRI Super-Resolution (SR) and have achieved promising results. However, existing diffusion-based SR models face two formidable challenges: 1) insufficient exploitation of complementary information from multi-contrast images, which hinders the faithful reconstruction of texture details and anatomical structures; and 2) reliance on fixed magnification factors, such as 2× or 4×, which is impractical for clinical scenarios that require arbitrary scale magnification. To circumvent these issues, this paper introduces IM-Diff, an implicit multi-contrast diffusion model for arbitrary-scale MRI SR, leveraging the merits of both multi-contrast information and the continuous nature of implicit neural representation (INR). Firstly, we propose an innovative hierarchical multi-contrast fusion (HMF) module with reference-aware cross Mamba (RCM) to effectively incorporate target-relevant information from the reference image into the target image, while ensuring a substantial receptive field with computational efficiency. Secondly, we introduce multiple wavelet INR magnification (WINRM) modules into the denoising process by integrating the wavelet implicit neural non-linearity, enabling effective learning of continuous representations of MR images. The involved wavelet activation enhances space-frequency concentration, further bolstering representation accuracy and robustness in INR. 
Extensive experiments on three public datasets demonstrate the superiority of our method over existing state-of-the-art SR models across various magnification factors.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143566929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
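The wavelet implicit neural non-linearity mentioned above can be illustrated with a Gabor-like activation, a sinusoid windowed by a Gaussian, which concentrates energy in both space and frequency. This is an illustrative sketch under our own parameterization; the paper's exact activation may differ:

```python
import math

def wavelet_activation(x, omega=1.0, sigma=1.0):
    """Gabor-like wavelet non-linearity: sin(omega * x) windowed by a
    Gaussian envelope of width sigma, giving localized oscillations."""
    return math.sin(omega * x) * math.exp(-(x * x) / (2 * sigma * sigma))
```

Compared with a plain sinusoidal activation (as in SIREN-style INRs), the Gaussian window damps oscillations far from the origin, which is one way to improve space-frequency concentration.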
{"title":"Counterfactual Bidirectional Co-Attention Transformer for Integrative Histology-Genomic Cancer Risk Stratification.","authors":"Zheyi Ji, Yongxin Ge, Chijioke Chukwudi, Kaicheng U, Sophia Meixuan Zhang, Yulong Peng, Junyou Zhu, Hossam Zaki, Xueling Zhang, Sen Yang, Xiyue Wang, Yijiang Chen, Junhan Zhao","doi":"10.1109/JBHI.2025.3548048","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3548048","url":null,"abstract":"<p><p>Applying deep learning to predict patient prognostic survival outcomes using histological whole-slide images (WSIs) and genomic data is challenging due to the morphological and transcriptomic heterogeneity present in the tumor microenvironment. Existing deep learning-enabled methods often exhibit learning biases, primarily because the genomic knowledge used to guide directional feature extraction from WSIs may be irrelevant or incomplete. This results in a suboptimal and sometimes myopic understanding of the overall pathological landscape, potentially overlooking crucial histological insights. To tackle these challenges, we propose the CounterFactual Bidirectional Co-Attention Transformer framework. By integrating a bidirectional co-attention layer, our framework fosters effective feature interactions between the genomic and histology modalities and ensures consistent identification of prognostic features from WSIs. Using counterfactual reasoning, our model utilizes causality to model unimodal and multimodal knowledge for cancer risk stratification. This approach directly addresses and reduces bias, enables the exploration of 'what-if' scenarios, and offers a deeper understanding of how different features influence survival outcomes. Our framework, validated across eight diverse cancer benchmark datasets from The Cancer Genome Atlas (TCGA), represents a major improvement over current histology-genomic model learning methods. 
It shows an average 2.5% improvement in c-index performance over 18 state-of-the-art models in predicting patient prognoses across eight cancer types. Our code is released at https://github.com/BusyJzy599/CFBCT-main.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143566928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
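The c-index reported above is Harrell's concordance index, the standard metric for survival-risk ranking: the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who experiences the event first. A minimal reference implementation of that standard definition (not code from the paper; ties in event time are skipped for brevity):

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's c-index.
    times: observed follow-up times; events: 1 if the event occurred,
    0 if censored; risks: predicted risk scores (higher = worse)."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue  # tied times skipped in this simplified version
        first, second = (i, j) if times[i] < times[j] else (j, i)
        if not events[first]:
            continue  # pair not comparable if earlier time is censored
        comparable += 1
        if risks[first] > risks[second]:
            concordant += 1
        elif risks[first] == risks[second]:
            concordant += 0.5  # ties in risk count half
    return concordant / comparable
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so a 2.5% absolute gain is a meaningful margin.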
{"title":"SecProGNN: Predicting Bronchoalveolar Lavage Fluid Secreted Protein Using Graph Neural Network.","authors":"Dan Shao, Guangzhao Zhang, Lin Lin, Yucong Xiong, Kai He, Liyan Sun","doi":"10.1109/JBHI.2025.3548263","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3548263","url":null,"abstract":"<p><p>Bronchoalveolar lavage fluid (BALF) is a liquid obtained from the alveoli and bronchi, often used to study pulmonary diseases. So far, proteomic analyses have identified over three thousand proteins in BALF. However, the comprehensive characterization of these proteins remains challenging due to their complexity and technological limitations. This paper presents a novel deep learning framework called SecProGNN, designed to predict secretory proteins in BALF. Firstly, SecProGNN represents proteins as graph-structured data, with amino acids connected based on their interactions. Then, these graphs are processed through a graph neural network (GNN) model to extract graph features. Finally, the extracted feature vectors are fed into a multi-layer perceptron (MLP) module to predict BALF secreted proteins. Additionally, by utilizing SecProGNN, we investigated potential biomarkers for lung adenocarcinoma and identified 16 promising candidates that may be secreted into BALF.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143566930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
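The amino-acid interaction graphs described in the SecProGNN abstract can be illustrated with a standard distance-threshold contact map over residue coordinates; the 8 Å cutoff and the function name are our assumptions, not necessarily the paper's exact edge definition:

```python
import numpy as np

def residue_graph(coords, cutoff=8.0):
    """Build an amino-acid interaction graph: residues i and j are
    connected if their 3D coordinates lie within `cutoff` Angstroms
    (a common contact-map convention). Returns a dense adjacency matrix
    with a zeroed diagonal (no self-loops)."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    adj = (dist <= cutoff).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj
```

Such an adjacency matrix, together with per-residue feature vectors, is the typical input to a GNN layer.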
{"title":"A Hierarchical Graph Convolutional Network with Infomax-Guided Graph Embedding for Population-Based ASD Detection.","authors":"Xiaoke Hao, Mingming Ma, Jiaqing Tao, Jiahui Cao, Jing Qin, Feng Liu, Daoqiang Zhang, Dong Ming","doi":"10.1109/JBHI.2025.3544302","DOIUrl":"10.1109/JBHI.2025.3544302","url":null,"abstract":"<p><p>Recently, functional magnetic resonance imaging (fMRI)-based brain networks have been shown to be an effective diagnostic tool with great potential for accurately detecting autism spectrum disorders (ASD). Meanwhile, the successful use of graph convolutional network (GCN) methods based on fMRI information has improved the classification accuracy of ASD. However, many graph convolution-based methods do not fully utilize the topological information of the brain functional connectivity network (BFCN) or ignore the effect of non-imaging information. Therefore, we propose a hierarchical graph embedding model that leverages both the topological information of the BFCN and the non-imaging information of the subjects to improve the classification accuracy. Specifically, our model first uses the Infomax Module to automatically identify embedded features in regions of interest (ROIs) in the brain. Then, these features, along with non-imaging information, are used to construct a population graph model. Finally, we design a graph convolution framework to propagate and aggregate the node features and obtain the results for ASD detection. Our model takes into account both the significance of the BFCN to individual subjects and the relationships between subjects in the population graph. The model performed autism detection using the Autism Brain Imaging Data Exchange (ABIDE) dataset and obtained an average accuracy of 77.2% and an AUC of 87.2%. These results exceed those of the baseline approach. 
Through extensive experiments, we demonstrate the competitiveness, robustness and effectiveness of our model in aiding ASD diagnosis.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143556708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
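The propagate-and-aggregate step of the population-graph framework above corresponds to a standard graph-convolution layer with symmetric normalization, H = ReLU(D^{-1/2}(A + I)D^{-1/2}XW). A minimal NumPy sketch of that standard layer (illustrative, not the paper's implementation):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer (Kipf-Welling style):
    add self-loops, symmetrically normalize the adjacency,
    then linearly transform and apply ReLU."""
    A_hat = A + np.eye(A.shape[0])          # self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```

In a population graph, each node is a subject, X holds the imaging-derived features, and edge weights encode non-imaging similarity between subjects.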
{"title":"NciaNet: A Non-Covalent Interaction-Aware Graph Neural Network for the Prediction of Protein-Ligand Interaction in Drug Discovery.","authors":"Guanyu Song, Meifeng Deng, Yunzhi Chen, Shijie Jia, Zhenguo Nie","doi":"10.1109/JBHI.2025.3547741","DOIUrl":"10.1109/JBHI.2025.3547741","url":null,"abstract":"<p><p>Precise quantification of protein-ligand interaction is critical in early-stage drug discovery. Artificial intelligence (AI) has gained massive popularity in this area, with deep-learning models used to extract features from ligand and protein molecules. However, these models often fail to capture intermolecular non-covalent interactions, the primary factor influencing binding, leading to lower accuracy and interpretability. Moreover, such models overlook the spatial structure of protein-ligand complexes, resulting in weaker generalization. To address these issues, we propose Non-covalent Interaction-aware Graph Neural Network (NciaNet), a novel method that effectively utilizes intermolecular non-covalent interactions and 3D protein-ligand structure. Our approach achieves excellent predictive performance on multiple benchmark datasets and outperforms competitive baseline models in the binding affinity task, with the benchmark core set v.2016 achieving an RMSE of 1.208 and an R of 0.833, and the core set v.2013 achieving an RMSE of 1.409 and an R of 0.805, under the high-quality refined v.2016 training conditions. Importantly, NciaNet successfully learns vital features related to protein-ligand interactions, providing biochemical insights and demonstrating practical utility and reliability. 
However, despite these strengths, there may still be limitations in generalizability to unseen protein-ligand complexes, suggesting potential avenues for future work.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143556725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
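The RMSE and R figures above are the root-mean-square error and the Pearson correlation coefficient between predicted and measured binding affinities. For reference, the standard definitions (generic implementations, not code from the paper):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between two value sequences."""
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)
```

Lower RMSE and higher R both indicate better agreement between predicted and experimental affinities.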
{"title":"Dynamic Local Conformal Reinforcement Network (DLCR) for Aortic Dissection Centerline Tracking.","authors":"Jingliang Zhao, An Zeng, Jiayu Ye, Dan Pan","doi":"10.1109/JBHI.2025.3547744","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3547744","url":null,"abstract":"<p><p>A pre-extracted aortic dissection (AD) centerline is very useful for the quantitative diagnosis and treatment of AD disease. However, centerline extraction is challenging because (i) the lumen of AD is very narrow and irregular, leading to failures in feature extraction and interrupted topology; and (ii) the acute nature of AD demands a fast algorithm, yet AD scans usually contain thousands of slices, making centerline extraction very time-consuming. In this paper, a fast AD centerline extraction algorithm, based on a local conformal deep reinforced agent and a dynamic tracking framework, is presented. The potential dependence of adjacent center points is utilized to form a novel 2.5D state and locally constrain the shape of the centerline, which improves the overlap ratio and accuracy of the tracked path. Moreover, we dynamically modify the width and direction of the detection window to focus on vessel-relevant regions and improve the ability to track small vessels. On a public AD dataset that involves 100 CTA scans, the proposed method obtains an average overlap of 97.23% and a mean distance error of 1.28 voxels, outperforming four state-of-the-art AD centerline extraction methods. 
The proposed algorithm is very fast, with an average processing time of 9.54 s, indicating that this method is well suited for clinical practice.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143556720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
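The overlap and mean-distance-error metrics reported above can be computed by matching tracked centerline points against reference centerline points; the 2-voxel coverage radius and function names here are our assumptions following a common evaluation convention, not necessarily the paper's exact protocol:

```python
import math

def centerline_metrics(tracked, reference, radius=2.0):
    """Overlap ratio and mean distance error between a tracked and a
    reference centerline, each a list of (x, y, z) voxel coordinates.
    A reference point counts as covered if some tracked point lies
    within `radius` voxels of it."""
    # distance from each tracked point to its nearest reference point
    errors = [min(math.dist(t, r) for r in reference) for t in tracked]
    covered = sum(1 for r in reference
                  if min(math.dist(t, r) for t in tracked) <= radius)
    return covered / len(reference), sum(errors) / len(errors)
```

This brute-force nearest-point search is O(n*m); a KD-tree would be preferable for centerlines with thousands of points.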