{"title":"scDrugLink: Single-Cell Drug Repurposing for CNS Diseases via Computationally Linking Drug Targets and Perturbation Signatures.","authors":"Li Huang, Xu Lu, Dongsheng Chen","doi":"10.1109/JBHI.2025.3552536","DOIUrl":"10.1109/JBHI.2025.3552536","url":null,"abstract":"<p><p>Central nervous system (CNS) diseases such as glioblastoma (GBM), multiple sclerosis (MS), and Alzheimer's disease (AD) remain challenging due to their complexity and limited treatments. Conventional drug repurposing strategies often rely on bulk RNA sequencing data, which can overlook cellular heterogeneity and mask rare but critical cell populations. Here, we introduce scDrugLink, a computational method that integrates single-cell transcriptomic data with drug targets and perturbation signatures to improve repurposing. For each cell type, scDrugLink constructs a Drug2Cell matrix based on drug targets to estimate promotion/inhibition scores and derives sensitivity/resistance scores by reverse matching signatures and disease-associated genes. These scores are then \"linked,\" yielding robust therapeutic rankings. In our study, we present a systematic evaluation of single-cell drug repurposing methods for CNS diseases. Applied to atlas data for GBM, MS, and AD, scDrugLink surpassed three state-of-the-art methods (ASGARD, DrugReSC, and scDrugPrio), achieving area under the receiver operating characteristic curve (AUC) ranges of 0.6286-0.7242 and area under the precision-recall curve (AUPRC) ranges of 0.3412-0.5484. It also ranked top when comparing AUC and AUPRC at the level of individual cell types. Moreover, applying the \"linking\" principle to baseline methods boosted their performance, on average improving AUC and AUPRC by 0.0160 and 0.0244, respectively. Despite the advancements, the complexity and heterogeneity of CNS diseases, along with incomplete drug data, indicate that further improvement is necessary. We discuss these challenges and suggest directions for enhancing single-cell drug repurposing in the future.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143657056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SAEF: Secure Anonymization and Encryption Framework for Open-Access Remote Photoplethysmography Datasets.","authors":"Fangfang Zhu, Honghong Su, Ji Ding, Qichao Niu, Qi Zhao, Jianwei Shuai","doi":"10.1109/JBHI.2025.3552455","DOIUrl":"10.1109/JBHI.2025.3552455","url":null,"abstract":"<p><p>The advancement of remote photoplethysmography (rPPG) technology depends on the availability of comprehensive datasets. However, the reliance on facial features for rPPG signal acquisition poses significant privacy concerns, hindering the development of open-access datasets. This work establishes privacy protection principles for rPPG datasets and introduces the secure anonymization and encryption framework (SAEF) to address these challenges while preserving rPPG data integrity. SAEF first identifies privacy-sensitive facial regions for removal through importance and necessity analysis. The irreversible removal of these regions has an insignificant impact on signal quality, with an R-value deviation of less than 0.06 for BVP extraction and a mean absolute error (MAE) deviation of less than 0.05 for heart rate (HR) calculation. Additionally, SAEF introduces a high-efficiency cascade key encryption method (CKEM), achieving encryption in 5.54 × 10<sup>-5</sup> seconds per frame, which is over three orders of magnitude faster than other methods, and reducing approximate point correlation (APC) values to below 0.005, approaching complete randomness. These advancements significantly improve real-time video encryption performance and security. Finally, SAEF serves as a preprocessing tool for generating volunteer-friendly, open-access rPPG datasets.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143657054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inducing Long-Term Plastic Changes and Visual Attention Enhancement Via One-Week Cerebellar Crus II Intermittent Theta Burst Stimulation (iTBS): An EEG Study.","authors":"Meiliang Liu, Chao Yu, Minjie Tian, Jingping Shi, Yunfang Xu, Zijin Li, Zhengye Si, Xiaoxiao Yang, Xinyue Yang, Junhao Huang, Li Yao, Kuiying Yin, Zhiwen Zhao","doi":"10.1109/JBHI.2025.3551698","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3551698","url":null,"abstract":"<p><p>Intermittent theta burst stimulation (iTBS) is a non-invasive technique frequently employed to induce neural plastic changes and enhance visual attention. Currently, most studies have utilized a single iTBS session on healthy subjects to induce short-term neural plastic changes within tens of minutes post-stimulation and to investigate its single-session effect on attention performance. Few studies have conducted multiple iTBS sessions on the cerebellum to explore long-term effects on the cerebral cortex and daily effects on visual attention performance. In this study, 18 healthy subjects were involved in a randomized, sham-controlled experiment over one week. All the subjects received a daily session of bilateral cerebellar Crus II iTBS or sham stimulation and completed a visual search task. Resting-state electroencephalogram (EEG) was collected 48 hours pre- and post-experiment to assess plastic changes induced by iTBS. The results indicated that the iTBS group exhibited higher accuracy and lower time costs than the sham group after three sessions of iTBS. In addition, iTBS-induced plastic changes persisted up to 48 hours post-experiment, including left-shifted individual alpha frequency, increased intrinsic excitability (the likelihood that a neuron will generate an output in response to a given input), and enhanced phase-locking value (PLV) functional connectivity (phase synchronization between different brain regions). Furthermore, we found that cerebellar iTBS induced a remote effect on the frontal region. Our study revealed the capacity of cerebellar Crus II iTBS to induce plastic changes and enhance attention performance, providing a potential avenue for using iTBS to promote rehabilitation.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143657050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Few-Shot Medical Image Segmentation with High-Confidence Prior Mask.","authors":"Ziming Cheng, Jianqin Zhao, Jingjing Deng, Haofeng Zhang","doi":"10.1109/JBHI.2025.3552428","DOIUrl":"10.1109/JBHI.2025.3552428","url":null,"abstract":"<p><p>Labeling large amounts of medical data is laborious, leading to the blooming of few-shot medical image segmentation, which aims to segment the foreground of a query image given a labeled support set. Almost all current models adopt the cosine distance to measure the similarity between prototypes and query features. However, the limitation of the cosine distance is exacerbated by intra-class differences and inter-class imbalances in medical image scenarios, where angle-only evaluation can induce misclassification, leading to under- and over-segmentation. Motivated by this, we propose a High-Confidence Prior Mask-guided Network (HCPMNet), comprising a High-Confidence Prior Mask Generator (HCPMG), a Target Region Mining (TRM) module, and a Prototype-Oriented Expansion Match (POEM) module. Our HCPMNet offers key advantages: 1) HCPMG is the first to jointly evaluate angle and magnitude similarity, generating high-confidence prior masks that accurately and completely localize target regions. 2) TRM mines and aggregates target class information under the guidance of prior masks. 3) POEM, based on both similarity metrics, correctly matches prototypes with query features. Extensive experiments on three general medical datasets show that our HCPMNet achieves a new state of the art by a clear margin. The code is available at: https://github.com/zmcheng9/HCPMNet.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143657049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CA<sup>2</sup>CL: Cluster-Aware Adversarial Contrastive Learning for Pathological Image Analysis.","authors":"Junjian Li, Hulin Kuang, Jin Liu, Hailin Yue, Jianxin Wang","doi":"10.1109/JBHI.2025.3552640","DOIUrl":"10.1109/JBHI.2025.3552640","url":null,"abstract":"<p><p>Pathological diagnosis assists in saving human lives, but such models are annotation-hungry and pathological images are notably expensive to annotate. Contrastive learning could be a promising solution that relies only on the unlabeled training data to generate informative representations. However, the majority of current methods in contrastive learning have the following two issues: (1) positive samples produced through random augmentation are less challenging, and (2) the false-negative-pair problem caused by negative sampling bias. To alleviate the above issues, we propose a novel contrastive learning method called Cluster-Aware Adversarial Contrastive Learning (CA<sup>2</sup>CL). Specifically, a mixed data augmentation technique is introduced to learn more transferable representations by generating more discriminative sample pairs. Furthermore, to mitigate the effects of inherent false negative pairs, we adopt a cluster-aware loss to identify similarities between instances and incorporate them into the process of contrastive learning. Finally, we generate challenging contrastive data pairs by adversarial learning and adversarially learn robust representations in the representation space without the labeled training data, aiming to maximize the similarity between each augmented sample and its related adversarial sample. Our proposed CA<sup>2</sup>CL is evaluated on two public datasets: NCT-CRC-HE and PCam for the fine-tuning and linear evaluation tasks and on two other public datasets: GlaS and CARG for the detection and segmentation tasks, respectively. Extensive experimental results demonstrate the superior performance of our method over several self-supervised learning (SSL) methods and ImageNet pretraining, particularly in scenarios with limited data availability, for all four tasks. The code and the pre-trained weights are available at https://github.com/junjianli106/CA2CL.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143657002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy-Preserving Data Augmentation for Digital Pathology Using Improved DCGAN.","authors":"Fengjun Hu, Fan Wu, Dongping Zhang, Hanjie Gu","doi":"10.1109/JBHI.2025.3551720","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3551720","url":null,"abstract":"<p><p>The intelligent analysis of Whole Slide Images (WSI) in digital pathology is critical for advancing precision medicine, particularly in oncology. However, the availability of WSI datasets is often limited by privacy regulations, which constrains the performance and generalizability of deep learning models. To address this challenge, this paper proposes an improved data augmentation method based on Deep Convolutional Generative Adversarial Network (DCGAN). Our approach leverages self-supervised pretraining with the CTransPath model to extract diverse and representationally rich WSI features, which guide the generation of high-quality synthetic images. We further enhance the model by introducing a least-squares adversarial loss and a frequency domain loss to improve pixel-level accuracy and structural fidelity, while incorporating residual blocks and skip connections to increase network depth, mitigate gradient vanishing, and improve training stability. Experimental results on the PatchCamelyon dataset demonstrate that our improved DCGAN achieves superior SSIM and FID scores compared to traditional models. The augmented datasets significantly enhance the performance of downstream classification tasks, improving accuracy, AUC, and F1 scores.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143657052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rad-EfficientNet: Improving Breast MRI Diagnosis Through Integration of Radiomics and Deep Learning.","authors":"Konstantinos Georgas, Ioannis A Vezakis, Ioannis Kakkos, Anastasia Natalia Douma, Evangelia Panourgias, Lia A Moulopoulos, George K Matsopoulos","doi":"10.1109/JBHI.2025.3551840","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3551840","url":null,"abstract":"<p><p>Breast cancer stands as the most prevalent cancer in women globally, with its worldwide escalating incidence and mortality rates underscoring the necessity of improving upon current non-invasive diagnostic methodologies for early-stage detection. This study introduces Rad-EfficientNet, a convolutional neural network (CNN) that incorporates radiomic features in its training pipeline to differentiate benign from malignant breast tumors in multiparametric 3 T breast magnetic resonance imaging (MRI). To this end, a dataset of 104 cases, including 45 benign and 59 malignant instances, was collected, and radiomic features were extracted from the 3D bounding boxes of each of the tumors. The Pearson's correlation coefficient and the Variance Inflation Factor were employed to reduce the radiomic features to a subset of 25. Rad-EfficientNet was then trained on both image and radiomics data. The proposed Rad-EfficientNet architecture builds upon the EfficientNet network family by introducing a radiomics fusion layer consisting of a feature reduction operation, concatenation of the radiomic features with the learned features, and finally a dropout layer. Rad-EfficientNet achieved an accuracy score of 82%, outperforming conventional classifiers trained solely on radiomic features, as well as hybrid models that combine learned and radiomic features post-training. These results indicate that by incorporating radiomics directly into the CNN training pipeline, complementary features are learned, thereby offering a way to improve current diagnostic deep learning techniques for breast lesion diagnosis.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143648344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced Diabetic Retinopathy Detection: An Explainable Semi-Supervised Approach Using Contrastive Learning.","authors":"Rashid Ali, Fiaz Gul Khan, Zia Ur Rehman, Daehan Kwak, Farman Ali","doi":"10.1109/JBHI.2025.3551696","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3551696","url":null,"abstract":"<p><p>Diabetic retinopathy (DR) is a leading cause of blindness and represents a critical challenge to global vision health. Early detection is essential to preventing irreversible eye damage. Automated medical image analysis plays a pivotal role in enabling timely diagnosis. However, the development of robust diagnostic models is challenged by the scarcity of labeled data and the prevalence of imbalanced and unlabeled datasets. Semi-supervised learning offers a potential solution by leveraging unlabeled data to enhance model performance. However, it is often limited by challenges such as unreliable pseudo-labeling, the exclusion of low-confidence data, and biases introduced by imbalanced datasets. To address these limitations, we propose a novel semi-supervised learning framework for DR detection that combines similarity and contrastive learning. Our approach utilizes class prototypes and an ensemble of classifiers to generate reliable pseudo-labels for unlabeled data. Unlike traditional methods that discard unreliable samples, our framework integrates them into the training process using contrastive learning. This allows us to extract valuable features and improve overall performance. Furthermore, we enhance the model's transparency and interpretability by incorporating the explainable AI technique GradCAM, which provides insights into the model's predictions for specific images. We evaluated the proposed method on the publicly available Kaggle DR dataset for diabetic retinopathy classification. Experimental results demonstrate that our approach achieves improved performance compared to existing semi-supervised learning methods. It also effectively leverages unreliable samples, highlighting its potential to advance DR diagnosis.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143648340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"XRadNet: A Radiomics-Guided Breast Cancer Molecular Subtype Prediction Network with a Radiomics Explanation.","authors":"Yinhao Liang, Wenjie Tang, Jianjun Zhang, Ting Wang, Wing W Y Ng, Siyi Chen, Kuiming Jiang, Xinhua Wei, Xinqing Jiang, Yuan Guo","doi":"10.1109/JBHI.2025.3552072","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3552072","url":null,"abstract":"<p><p>In this work, we propose a radiomics-guided neural network, XRadNet, for breast cancer molecular subtype prediction. XRadNet is a two-head neural network, with one head for predicting molecular subtypes and the other for approximating radiomic features. In addition, a training scheme with radiomics guidance is proposed to improve performance. First, we conduct a series of experiments to test the radiomic feature learning capacity of different neural networks, which determines the backbone of XRadNet. Moreover, significant radiomic features are also determined according to radiomics and prior knowledge. XRadNet is subsequently pretrained in a self-supervised manner. The pretraining uses synthetic samples to train the backbone and radiomic feature regression head. This mitigates the impact of an insufficient number of samples. Finally, XRadNet is fine-tuned with a downstream real-world dataset by enabling all heads. Furthermore, a logistic regression is built with radiomic features and learned features, which provides a new way of interpreting the trained model with concepts familiar to radiologists. The experimental results show that XRadNet effectively predicts the four molecular subtypes of breast cancer. These results also demonstrate that the proposed training scheme yields performance better than or competitive with models pretrained on ImageNet or medical datasets.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143648399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Position paper: Extending Credibility Assessment of In Silico Medicine Predictors to Machine Learning Predictors.","authors":"Marco Viceconti, Filippo Lanubile, Antonella Carbonaro, Sabato Mellone, Cristina Curreli, Alessandra Aldieri, Saverio Ranciati, Angela Montanari","doi":"10.1109/JBHI.2025.3552320","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3552320","url":null,"abstract":"<p><p>There are several situations where it would be convenient if a quantity of interest essential to support a medical or regulatory decision could be predicted as a function of other measurable quantities rather than measured experimentally. To do so, we need to ensure that in all practical cases, the predicted value does not differ from what we would measure experimentally by more than an acceptable threshold, defined by the context in which that quantity of interest is used in the decision-making process. This is called Credibility Assessment. Initial work, which guided the elaboration of the first technical standard on the topic (ASME VV-40:2018), focused on predictive models built from available mechanistic knowledge of the phenomenon of interest. For this class of predictive models, sometimes called biophysical models, a credibility assessment practice based on the so-called Verification, Validation, Uncertainty Quantification and Applicability (VVUQA) analysis is accepted. Through theoretical considerations, this position paper aims to summarise a complex debate on whether such an approach can be extended to predictive models built without any mechanistic knowledge (machine learning (ML) predictors). We conclude that the VVUQA can be extended to ML-based predictors; however, since there is no certainty that the features used to predict the quantity of interest are necessary and sufficient, according to the VVUQA framework, such credibility assessment is limited to the test sets used for the validation studies. This calls for a Total Product Life Cycle approach, where periodic retesting of ML-based predictors is part of post-marketing surveillance to ensure that no \"unknown bias\" may play a role.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143648342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}