{"title":"Equivariant 3D-Conditional Diffusion Model for De Novo Drug Design.","authors":"Jia Zheng, Hai-Cheng Yi, Zhu-Hong You","doi":"10.1109/JBHI.2024.3491318","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3491318","url":null,"abstract":"<p><p>De novo drug design speeds up drug discovery, mitigating its time and cost burdens with advanced computational methods. Previous work either insufficiently utilized the 3D geometric structure of the target proteins, or generated ligands in an order that was inconsistent with real physics. Here we propose an equivariant 3D-conditional diffusion model, named DiffFBDD, for generating new pharmaceutical compounds based on 3D geometric information of specific target protein pockets. DiffFBDD overcomes the underutilization of geometric information by integrating full atomic information of pockets to backbone atoms using an equivariant graph neural network. Moreover, we develop a diffusion approach to generate drugs by generating ligand fragments for specific protein pockets, which requires fewer computational resources and less generation time (65.98% ∼ 96.10% lower). DiffFBDD offers better performance than state-of-the-art models in generating ligands with strong binding affinity to specific protein pockets, while maintaining high validity, uniqueness, and novelty, with clear potential for exploring the drug-like chemical space. 
The source code of this study is freely available at https://github.com/haichengyi/DiffFBDD.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142576022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HSC-T: B-ultrasound-to-elastography Translation via Hierarchical Structural Consistency Learning for Thyroid Cancer Diagnosis.","authors":"Hongcheng Han, Zhiqiang Tian, Qinbo Guo, Jue Jiang, Shaoyi Du, Juan Wang","doi":"10.1109/JBHI.2024.3491905","DOIUrl":"10.1109/JBHI.2024.3491905","url":null,"abstract":"<p><p>Elastography ultrasound imaging is increasingly important in the diagnosis of thyroid cancer and other diseases, but its reliance on specialized equipment and techniques limits widespread adoption. This paper proposes a novel multimodal ultrasound diagnostic pipeline that expands the application of elastography ultrasound by translating B-ultrasound (BUS) images into elastography images (EUS). Additionally, to address the limitations of existing image-to-image translation methods, which struggle to effectively model inter-sample variations and accurately capture regional-scale structural consistency, we propose a BUS-to-EUS translation method based on hierarchical structural consistency. By incorporating domain-level, sample-level, patch-level, and pixel-level constraints, our approach guides the model in learning a more precise mapping from BUS to EUS, thereby enhancing diagnostic accuracy. Experimental results demonstrate that the proposed method significantly improves the accuracy of BUS-to-EUS translation on the MTUSI dataset and that the generated elastography images enhance nodule diagnostic accuracy compared to solely using BUS images on the STUSI and the BUSI datasets. This advancement highlights the potential for broader application of elastography in clinical practice. 
The code is available at https://github.com/HongchengHan/HSC-T.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142576023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UnBias: Unveiling Bias Implications in Deep Learning Models for Healthcare Applications.","authors":"Asmaa AbdulQawy, Elsayed Sallam, Amr Elkholy","doi":"10.1109/JBHI.2024.3484951","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3484951","url":null,"abstract":"<p><p>The rapid integration of deep learningpowered artificial intelligence systems in diverse applications such as healthcare, credit assessment, employment, and criminal justice has raised concerns about their fairness, particularly in how they handle various demographic groups. This study delves into the existing biases and their ethical implications in deep learning models. It introduces an UnBias approach for assessing bias in different deep neural network architectures and detects instances where bias seeps into the learning process, shifting the model's focus away from the main features. This contributes to the advancement of equitable and trustworthy AI applications in diverse social settings, especially in healthcare. A case study on COVID-19 detection is carried out, involving chest X-ray scan datasets from various publicly accessible repositories and five well-represented and underrepresented gender-based models across four deep-learning architectures: ResNet50V2, DenseNet121, InceptionV3, and Xception.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142576025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GMAEEG: A Self-Supervised Graph Masked Autoencoder for EEG Representation Learning.","authors":"Zanhao Fu, Huaiyu Zhu, Yisheng Zhao, Ruohong Huan, Yi Zhang, Shuohui Chen, Yun Pan","doi":"10.1109/JBHI.2024.3443651","DOIUrl":"10.1109/JBHI.2024.3443651","url":null,"abstract":"<p><p>Annotated electroencephalogram (EEG) data is the prerequisite for artificial intelligence-driven EEG autoanalysis. However, the scarcity of annotated data due to its high-cost and the resulted insufficient training limits the development of EEG autoanalysis. Generative self-supervised learning, represented by masked autoencoder, offers potential but struggles with non-Euclidean structures. To alleviate these challenges, this work proposes a self-supervised graph masked autoencoder for EEG representation learning, named GMAEEG. Concretely, a pretrained model is enriched with temporal and spatial representations through a masked signal reconstruction pretext task. A learnable dynamic adjacency matrix, initialized with prior knowledge, adapts to brain characteristics. Downstream tasks are achieved by finetuning pretrained parameters, with the adjacency matrix transferred based on task functional similarity. Experimental results demonstrate that with emotion recognition as the pretext task, GMAEEG reaches superior performance on various downstream tasks, including emotion, major depressive disorder, Parkinson's disease, and pain recognition. This study is the first to tailor the masked autoencoder specifically for EEG representation learning considering its non-Euclidean characteristics. 
Further, graph connection analysis based on GMAEEG may provide insights for future clinical studies.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141987837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blood Pressure Estimation Based on the Timing of the Arterial Reflected Waves by Oscillometric Pulse Wave Decomposition.","authors":"Mahsa Alidadi, Maryam Mohebbi, Mohamad Forouzanfar","doi":"10.1109/JBHI.2024.3455100","DOIUrl":"10.1109/JBHI.2024.3455100","url":null,"abstract":"<p><p>Recent innovations in oscillometric blood pressure (BP) estimation leveraging pulse transit time (PTT) show promise even when pulse amplitude is compromised. However, current PTT methods often require multiple sensors. This study introduces a novel approach for oscillometric BP estimation utilizing only a single sensor. Our method simplifies PTT measurement through decomposing oscillometric pulses into forward and reflected wave components. Notably, we found that the time interval between the forward and first reflected wave is shortest when cuff pressure is at the systolic level, while the interval between the first and second reflected waves is longest when the cuff pressure corresponds to the diastolic level. To validate our method, we extensively analyzed two datasets: the first comprised 150 arm oscillometric recordings from 10 healthy individuals, and the second included 425 wrist oscillometric recordings from 85 healthy individuals. Our analysis revealed minimal standard deviation errors for BP measurements: 7.03 mmHg (systolic) and 5.96 mmHg (diastolic) for arm recordings, and 7.98 mmHg (systolic) and 5.39 mmHg (diastolic) for wrist recordings. Across both datasets, the mean error for both systolic and diastolic BP measurements was merely 0.01 mmHg, underscoring the precision of the method. Comparative analysis against established methodologies further affirmed the superior performance of our approach, which also met the ANSI/AAMI/ISO81060-2 standard. 
While our dataset primarily comprises recordings from healthy individuals, further validation on diverse datasets encompassing challenging cases such as obesity, arterial stiffness, and atrial fibrillation is imperative to evaluate its applicability across various clinical scenarios.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142139907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hypercomplex Graph Neural Network: Towards Deep Intersection of Multi-modal Brain Networks.","authors":"Yanwu Yang, Chenfei Ye, Guoqing Cai, Kunru Song, Jintao Zhang, Yang Xiang, Ting Ma","doi":"10.1109/JBHI.2024.3490664","DOIUrl":"10.1109/JBHI.2024.3490664","url":null,"abstract":"<p><p>The multi-modal neuroimage study has provided insights into understanding the heteromodal relationships between brain network organization and behavioral phenotypes. Integrating data from various modalities facilitates the characterization of the interplay among anatomical, functional, and physiological brain alterations or developments. Graph Neural Networks (GNNs) have recently become popular in analyzing and fusing multi-modal, graph-structured brain networks. However, effectively learning complementary representations from other modalities remains a significant challenge due to the sophisticated and heterogeneous inter-modal dependencies. Furthermore, most existing studies often focus on specific modalities (e.g., only fMRI and DTI), which limits their scalability to other types of brain networks. To overcome these limitations, we propose a HyperComplex Graph Neural Network (HC-GNN) that models multi-modal networks as hypercomplex tensor graphs. In our approach, HC-GNN is conceptualized as a dynamic spatial graph, where the attentively learned inter-modal associations are represented as the adjacency matrix. HC-GNN leverages hypercomplex operations for inter-modal intersections through cross-embedding and cross-aggregation, enriching the deep coupling of multi-modal representations. We conduct a statistical analysis on the saliency maps to associate disease biomarkers. Extensive experiments on three datasets demonstrate the superior classification performance of our method and its strong scalability to various types of modalities. 
Our work presents a powerful paradigm for the study of multi-modal brain networks.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142562726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wasserstein Generative Adversarial Networks Based Differential Privacy Metaverse Data Sharing.","authors":"Hai Liu, Dequan Xu, Youliang Tian, Changgen Peng, Zhenqiang Wu, Ziyue Wang","doi":"10.1109/JBHI.2023.3287092","DOIUrl":"10.1109/JBHI.2023.3287092","url":null,"abstract":"<p><p>Although differential privacy metaverse data sharing can avoid privacy leakage of sensitive data, randomly perturbing local metaverse data will lead to an imbalance between utility and privacy. Therefore, this work proposed models and algorithms of differential privacy metaverse data sharing using Wasserstein generative adversarial networks (WGAN). Firstly, this study constructed the mathematical model of differential privacy metaverse data sharing by introducing appropriate regularization term related to generated data's discriminant probability into WGAN. Secondly, we established basic model and algorithm for differential privacy metaverse data sharing using WGAN based on the constructed mathematical model, and theoretically analyzed basic algorithm. Thirdly, we established federated model and algorithm for differential privacy metaverse data sharing using WGAN by serialized training based on basic model, and theoretically analyzed federated algorithm. 
Finally, based on utility and privacy metrics, we conducted a comparative analysis for the basic algorithm of differential privacy metaverse data sharing using WGAN, and experimental results validate theoretical results, which show that algorithms of differential privacy metaverse data sharing using WGAN maintaining equilibrium between privacy and utility.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9643965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FEFA: Frequency Enhanced Multi-Modal MRI Reconstruction With Deep Feature Alignment.","authors":"Xuanmin Chen, Liyan Ma, Shihui Ying, Dinggang Shen, Tieyong Zeng","doi":"10.1109/JBHI.2024.3432139","DOIUrl":"10.1109/JBHI.2024.3432139","url":null,"abstract":"<p><p>Integrating complementary information from multiple magnetic resonance imaging (MRI) modalities is often necessary to make accurate and reliable diagnostic decisions. However, the different acquisition speeds of these modalities mean that obtaining information can be time consuming and require significant effort. Reference-based MRI reconstruction aims to accelerate slower, under-sampled imaging modalities, such as T2-modality, by utilizing redundant information from faster, fully sampled modalities, such as T1-modality. Unfortunately, spatial misalignment between different modalities often negatively impacts the final results. To address this issue, we propose FEFA, which consists of cascading FEFA blocks. The FEFA block first aligns and fuses the two modalities at the feature level. The combined features are then filtered in the frequency domain to enhance the important features while simultaneously suppressing the less essential ones, thereby ensuring accurate reconstruction. Furthermore, we emphasize the advantages of combining the reconstruction results from multiple cascaded blocks, which also contributes to stabilizing the training process. Compared to existing registration-then-reconstruction and cross-attention-based approaches, our method is end-to-end trainable without requiring additional supervision, extensive parameters, or heavy computation. 
Experiments on the public fastMRI, IXI and in-house datasets demonstrate that our approach is effective across various under-sampling patterns and ratios.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141751579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combination of Channel Reordering Strategy and Dual CNN-LSTM for Epileptic Seizure Prediction Using Three iEEG Datasets.","authors":"Xiaoshuang Wang, Ziheng Gao, Meiyan Zhang, Ying Wang, Lin Yang, Jianwen Lin, Tommi Karkkainen, Fengyu Cong","doi":"10.1109/JBHI.2024.3438829","DOIUrl":"10.1109/JBHI.2024.3438829","url":null,"abstract":"<p><strong>Objective: </strong>Intracranial electroencephalogram (iEEG) signals are generally recorded using multiple channels, and channel selection is therefore a significant means in studying iEEG-based seizure prediction. For n channels, [Formula: see text] channel cases can be generated for selection. However, by this means, an increase in n can cause an exponential increase in computational consumption, which may result in a failure of channel selection when n is too large. Hence, it is necessary to explore reasonable channel selection strategies under the premise of controlling computational consumption and ensuring high classification accuracy. Given this, we propose a novel method of channel reordering strategy combined with dual CNN-LSTM for effectively predicting seizures.</p><p><strong>Method: </strong>First, for each patient with n channels, interictal and preictal iEEG samples from each single channel are input into the CNN-LSTM model for classification. Then, the F1-score of each single channel is calculated, and the channels are reordered in descending order according to the size of F1-scores (channel reordering strategy). Next, iEEG signals with an increasing number of channels are successively fed into the CNN-LSTM model for classification again. Finally, according to the classification results from n channel cases, the channel case with the highest classification rate is selected.</p><p><strong>Results: </strong>Our method is evaluated on the three iEEG datasets: the Freiburg, the SWEC-ETHZ and the American Epilepsy Society Seizure Prediction Challenge (AES-SPC). 
At the event-based level, the sensitivities of 100%, 100% and 90.5%, and the false prediction rates (FPRs) of 0.10/h, 0/h and 0.47/h, are achieved for the three datasets, respectively. Moreover, compared to an unspecific random predictor, our method also shows a better performance for all patients and dogs from the three datasets. At the segment-based level, the sensitivities-specificities-accuracies-AUCs of 88.1%-94.0%-93.5%-0.9101, 99.1%-99.7%-99.6%-0.9935, and 69.2%-79.9%-78.2%-0.7373, are attained for the three datasets, respectively.</p><p><strong>Conclusion: </strong>Our method can effectively predict seizures and address the challenge of an excessive number of channels during channel selection.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141897322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DiffMAR: A Generalized Diffusion Model for Metal Artifact Reduction in CT Images.","authors":"Tianxiao Cai, Xiang Li, Chenglan Zhong, Wei Tang, Jixiang Guo","doi":"10.1109/JBHI.2024.3439729","DOIUrl":"10.1109/JBHI.2024.3439729","url":null,"abstract":"<p><p>X-ray imaging frequently introduces varying degrees of metal artifacts to computed tomography (CT) images when metal implants are present. For the metal artifact reduction (MAR) task, existing end-to-end methods often exhibit limited generalization capabilities. While methods based on multiple iterations often suffer from accumulative error, resulting in lower-quality restoration outcomes. In this work, we innovatively present a generalized diffusion model for Metal Artifact Reduction (DiffMAR). The proposed method utilizes a linear degradation process to simulate the physical phenomenon of metal artifact formation in CT images and directly learn an iterative restoration process from paired CT images in the reverse process. During the reverse process of DiffMAR, a Time-Latent Adjustment (TLA) module is designed to adjust time embedding at the latent level, thereby minimizing the accumulative error during iterative restoration. We also designed a structure information extraction (SIE) module to utilize linear interpolation data in the image domain, guiding the generation of anatomical structures during the iterative restoring. This leads to more accurate and robust shadow-free image generation. 
Comprehensive analysis, including both synthesized data and clinical evidence, confirms that our proposed method surpasses the current state-of-the-art (SOTA) MAR methods in terms of both image generation quality and generalization.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141901580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}