CellCircLoc: Deep Neural Network for Predicting and Explaining Cell Line-Specific CircRNA Subcellular Localization
Min Zeng, Jingwei Lu, Yiming Li, Chengqian Lu, Shichao Kan, Fei Guo, Min Li
IEEE Journal of Biomedical and Health Informatics, 2024-11-04. DOI: 10.1109/JBHI.2024.3491732

Abstract: The subcellular localization of circular RNAs (circRNAs) is crucial for understanding their functional relevance and regulatory mechanisms. CircRNA subcellular localization varies across cell lines, demonstrating the diversity and complexity of circRNA regulation within distinct cellular contexts. However, existing computational methods for predicting circRNA subcellular localization often ignore the importance of cell line specificity and instead train a general model on aggregated data from all cell lines. Given the diversity and context-dependent behavior of circRNAs across cell lines, cell line-specific models are needed to accurately predict circRNA subcellular localization. In this study, we propose CellCircLoc, a sequence-based deep learning model for circRNA subcellular localization prediction, trained separately for each cell line. CellCircLoc combines convolutional neural networks, Transformer blocks, and bidirectional long short-term memory to capture both local sequence features and long-range dependencies within the sequences. In the Transformer blocks, CellCircLoc uses an attentive convolution mechanism to capture the importance of individual nucleotides. Extensive experiments demonstrate the effectiveness of CellCircLoc in accurately predicting circRNA subcellular localization across different cell lines, outperforming computational models that do not consider cell line specificity. Moreover, the interpretability of CellCircLoc facilitates the discovery of important motifs associated with circRNA subcellular localization. The CellCircLoc web server is available at http://csuligroup.com:8000/cellcircloc, and the source code can be obtained from https://github.com/CSUBioGroup/CellCircLoc.
{"title":"Equivariant 3D-Conditional Diffusion Model for De Novo Drug Design.","authors":"Jia Zheng, Hai-Cheng Yi, Zhu-Hong You","doi":"10.1109/JBHI.2024.3491318","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3491318","url":null,"abstract":"<p><p>De novo drug design speeds up drug discovery, mitigating its time and cost burdens with advanced computational methods. Previous work either insufficiently utilized the 3D geometric structure of the target proteins, or generated ligands in an order that was inconsistent with real physics. Here we propose an equivariant 3D-conditional diffusion model, named DiffFBDD, for generating new pharmaceutical compounds based on 3D geometric information of specific target protein pockets. DiffFBDD overcomes the underutilization of geometric information by integrating full atomic information of pockets to backbone atoms using an equivariant graph neural network. Moreover, we develop a diffusion approach to generate drugs by generating ligand fragments for specific protein pockets, which requires fewer computational resources and less generation time (65.98% ∼ 96.10% lower). DiffFBDD offers better performance than state-of-the-art models in generating ligands with strong binding affinity to specific protein pockets, while maintaining high validity, uniqueness, and novelty, with clear potential for exploring the drug-like chemical space. The source code of this study is freely available at https://github.com/haichengyi/DiffFBDD.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142576022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HSC-T: B-Ultrasound-to-Elastography Translation via Hierarchical Structural Consistency Learning for Thyroid Cancer Diagnosis
Hongcheng Han, Zhiqiang Tian, Qinbo Guo, Jue Jiang, Shaoyi Du, Juan Wang
IEEE Journal of Biomedical and Health Informatics, 2024-11-04. DOI: 10.1109/JBHI.2024.3491905

Abstract: Elastography ultrasound imaging is increasingly important in the diagnosis of thyroid cancer and other diseases, but its reliance on specialized equipment and techniques limits widespread adoption. This paper proposes a novel multimodal ultrasound diagnostic pipeline that expands the application of elastography ultrasound by translating B-ultrasound (BUS) images into elastography ultrasound (EUS) images. To address the limitations of existing image-to-image translation methods, which struggle to model inter-sample variations and to capture regional-scale structural consistency, we propose a BUS-to-EUS translation method based on hierarchical structural consistency. By incorporating domain-level, sample-level, patch-level, and pixel-level constraints, our approach guides the model toward a more precise mapping from BUS to EUS, thereby enhancing diagnostic accuracy. Experimental results demonstrate that the proposed method significantly improves the accuracy of BUS-to-EUS translation on the MTUSI dataset and that the generated elastography images improve nodule diagnostic accuracy on the STUSI and BUSI datasets compared to using BUS images alone. This advancement highlights the potential for broader application of elastography in clinical practice. The code is available at https://github.com/HongchengHan/HSC-T.
{"title":"UnBias: Unveiling Bias Implications in Deep Learning Models for Healthcare Applications.","authors":"Asmaa AbdulQawy, Elsayed Sallam, Amr Elkholy","doi":"10.1109/JBHI.2024.3484951","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3484951","url":null,"abstract":"<p><p>The rapid integration of deep learningpowered artificial intelligence systems in diverse applications such as healthcare, credit assessment, employment, and criminal justice has raised concerns about their fairness, particularly in how they handle various demographic groups. This study delves into the existing biases and their ethical implications in deep learning models. It introduces an UnBias approach for assessing bias in different deep neural network architectures and detects instances where bias seeps into the learning process, shifting the model's focus away from the main features. This contributes to the advancement of equitable and trustworthy AI applications in diverse social settings, especially in healthcare. A case study on COVID-19 detection is carried out, involving chest X-ray scan datasets from various publicly accessible repositories and five well-represented and underrepresented gender-based models across four deep-learning architectures: ResNet50V2, DenseNet121, InceptionV3, and Xception.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142576025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hypercomplex Graph Neural Network: Towards Deep Intersection of Multi-Modal Brain Networks
Yanwu Yang, Chenfei Ye, Guoqing Cai, Kunru Song, Jintao Zhang, Yang Xiang, Ting Ma
IEEE Journal of Biomedical and Health Informatics, 2024-11-01. DOI: 10.1109/JBHI.2024.3490664

Abstract: Multi-modal neuroimaging studies have provided insights into the heteromodal relationships between brain network organization and behavioral phenotypes. Integrating data from various modalities facilitates the characterization of the interplay among anatomical, functional, and physiological brain alterations or developments. Graph Neural Networks (GNNs) have recently become popular for analyzing and fusing multi-modal, graph-structured brain networks. However, effectively learning complementary representations from other modalities remains a significant challenge due to the sophisticated and heterogeneous inter-modal dependencies. Furthermore, most existing studies focus on specific modalities (e.g., only fMRI and DTI), which limits their scalability to other types of brain networks. To overcome these limitations, we propose a HyperComplex Graph Neural Network (HC-GNN) that models multi-modal networks as hypercomplex tensor graphs. In our approach, HC-GNN is conceptualized as a dynamic spatial graph, where the attentively learned inter-modal associations are represented as the adjacency matrix. HC-GNN leverages hypercomplex operations for inter-modal intersections through cross-embedding and cross-aggregation, enriching the deep coupling of multi-modal representations. We conduct a statistical analysis on the saliency maps to associate disease biomarkers. Extensive experiments on three datasets demonstrate the superior classification performance of our method and its strong scalability to various types of modalities. Our work presents a powerful paradigm for the study of multi-modal brain networks.
EEG Temporal-Spatial Feature Learning for Automated Selection of Stimulus Parameters in Electroconvulsive Therapy
Fan Wang, Dan Chen, Shenhong Weng, Tengfei Gao, Yiping Zuo, Yuntao Zheng
IEEE Journal of Biomedical and Health Informatics, 2024-10-31. DOI: 10.1109/JBHI.2024.3489221

Abstract: The risk of adverse effects in electroconvulsive therapy (ECT), such as cognitive impairment, can be high if an excessive stimulus is applied to induce the necessary generalized seizure (GS); conversely, an inadequate stimulus results in failure. Recent efforts to automate this task can facilitate statistical analyses of individual parameters or qualitative predictions, but this automation still lags significantly behind the requirements of clinical practice. This study addresses the issue by predicting the probability of GS induction under the joint restriction of a patient's EEG (electroencephalogram) and the stimulus parameters, sustained by a two-stage learning model (ECTnet): 1) Temporal-Spatial Feature Learning: channel-wise convolution via multiple convolution kernels first learns deep features of the EEG, followed by a "ConvLSTM" that constructs temporal-spatial features aided by convolution operations enforced at the LSTM gates; 2) GS Prediction: the probability of seizure induction is predicted from the EEG features fused with the stimulus parameters, through which the optimal parameter setting(s) may be obtained by minimizing the stimulus charge while keeping the probability above a threshold. Experiments conducted on EEG data from 96 subjects with mental disorders examine the performance and design of ECTnet. They indicate that ECTnet can effectively automate the selection of optimal stimulus parameters: 1) it achieved an AUC of 0.746, an F1-score of 0.90, a precision of 89%, and a recall of 93% in predicting seizure induction, outperforming the state-of-the-art counterpart; and 2) including parameter features increased the F1-score by 0.054.
HepNet: Deep Neural Network for Classification of Early-Stage Hepatic Steatosis Using Microwave Signals
Sazid Hasan, Aida Brankovic, Md Abdul Awal, Sasan Ahdi Rezaeieh, Shelley E Keating, Amin M Abbosh, Ali Zamani
IEEE Journal of Biomedical and Health Informatics, 2024-10-31. DOI: 10.1109/JBHI.2024.3489626

Abstract: Hepatic steatosis, a key factor in chronic liver diseases, is difficult to diagnose early. This study introduces a classifier for hepatic steatosis based on microwave technology, validated through clinical trials. Our method uses microwave signals and deep learning to achieve reliable detection. It comprises a pipeline with simulation data, a new deep learning model called HepNet, and transfer learning. The simulation data, created with 3D electromagnetic tools, are used for training and evaluating the model. HepNet uses skip connections in convolutional layers and two fully connected layers for better feature extraction and generalization. Calibration and uncertainty assessments ensure the model's robustness. In simulation, HepNet achieved an F1-score of 0.91 and a confidence level of 0.97 for classifications with entropy ≤ 0.1, outperforming traditional models such as LeNet (0.81) and ResNet (0.87). We also use transfer learning to adapt HepNet to clinical data with limited patient samples. Using 1H-MRS as the reference standard for two microwave liver scanners, HepNet achieved high F1-scores of 0.95 and 0.88 on 94 and 158 patient samples, respectively, showing its clinical potential.
LF-SynthSeg: Label-Free Brain Tissue-Assisted Tumor Synthesis and Segmentation
Pengxiao Xu, Junyan Lyu, Li Lin, Pujin Cheng, Xiaoying Tang
IEEE Journal of Biomedical and Health Informatics, 2024-10-31. DOI: 10.1109/JBHI.2024.3489721

Abstract: Unsupervised brain tumor segmentation is pivotal for disease diagnosis, surgical planning, and treatment response monitoring, with the distinct advantage of obviating the need for labeled data. Traditional methodologies in this domain, however, often fall short of fully capitalizing on the extensive prior knowledge of brain tissue, typically approaching the task merely as an anomaly detection challenge. In our research, we present an innovative strategy that effectively integrates prior knowledge of brain tissue into both the synthesis and segmentation of brain tumors from T2-weighted Magnetic Resonance Imaging scans. Central to our method is the tumor synthesis mechanism, which employs randomly generated ellipsoids in conjunction with the intensity profiles of brain tissues. This methodology not only fosters significant variation in tumor presentation within the synthesized images but also facilitates the creation of an essentially unlimited pool of abnormal T2-weighted images that closely replicate the characteristics of real tumor-bearing scans. Our training protocol extends beyond mere tumor segmentation to encompass the segmentation of brain tissues, thereby directing the network's attention to the boundary relationship between brain tumor and brain tissue and improving the robustness of our method. We evaluate our approach on five widely recognized public datasets (BraTS 2019, BraTS 2020, BraTS 2021, PED, and SSA), and the results show that our method outperforms state-of-the-art unsupervised tumor segmentation methods by large margins. Moreover, the proposed method achieves more than 92% of fully supervised performance on the same testing datasets.
{"title":"Forecasting Epidemic Spread with Recurrent Graph Gate Fusion Transformers.","authors":"Minkyoung Kim, Jae Heon Kim, Beakcheol Jang","doi":"10.1109/JBHI.2024.3488274","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3488274","url":null,"abstract":"<p><p>Predicting the unprecedented, nonlinear nature of COVID-19 presents a significant public health challenge. Recent advances in deep learning, such as Graph Neural Networks, Recurrent Neural Networks (RNNs), and Transformers, have enhanced predictions by modeling regional interactions, managing autoregressive time series, and identifying long-term dependencies. However, prior works often feature shallow integration of these models, leading to simplistic graph embeddings and inadequate analysis across different graph types. Additionally, excessive reliance on historical COVID-19 data limits the potential of utilizing time-lagged data, such as intervention policy information. To address these challenges, we introduce ReGraFT, a novel Sequence-to-Sequence model designed for robust long-term forecasting of COVID-19. ReGraFT integrates Multigraph-Gated Recurrent Units (MGRUs) with adaptive graphs, leveraging data from individual states, including infection rates, policy changes, and interstate travel. First, ReGraFT employs adaptive MGRU cells within an RNN framework to capture inter-regional dependencies, dynamically modeling complex transmission dynamics. Second, the model features a Self-Normalizing Priming layer using SELUs to enhance stability and accuracy across short, medium, and long-term forecasts. Lastly, ReGraFT systematically compares and integrates various graph types derived from fully connected layers, pooling, and attention-based mechanisms to provide a nuanced representation of inter-regional relationships. By incorporating lagged COVID-19 policy data, ReGraFT refines forecasts, demonstrating RMSE reductions of 2.39-35.92% compared to state-of-the-art models. This work provides accurate long-term predictions, aiding in better public health decisions. Our code is available at https://github.com/mfriendly/ReGraFT.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142545158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fine-Grained Temporal Site Monitoring in EGD Streams via Visual Time-Aware Embedding and Vision-Text Asymmetric Coworking
Fang Peng, Hongkuan Shi, Shiquan He, Qiang Hu, Ting Li, Fan Huang, Xinxia Feng, Mei Liu, Jiazhi Liao, Qiang Li, Zhiwei Wang
IEEE Journal of Biomedical and Health Informatics, 2024-10-30. DOI: 10.1109/JBHI.2024.3488514

Abstract: Esophagogastroduodenoscopy (EGD) requires complete inspection of numerous upper gastrointestinal (UGI) sites for precise cancer screening. Automated temporal site monitoring for EGD assistance is thus in high demand, yet often fails when existing online action detection methods are applied directly. The key challenges are two-fold: 1) global camera motion dominates, invalidating temporal patterns derived from object optical flows, and 2) the UGI sites are fine-grained, yielding highly homogenized appearances. In this paper, we propose an EGD-customized model powered by two novel designs, Visual Time-aware Embedding plus Vision-text Asymmetric Coworking (VTE+VAC), for real-time, accurate, fine-grained UGI site monitoring. Concretely, VTE learns visual embeddings by differentiating frames via classification losses and, in parallel, by reordering sampled time-agnostic frames into temporal coherence via a ranking loss. This joint objective encourages VTE to capture sequential relations without resorting to the inapplicable object optical flows, and thus to provide time-aware frame-wise embeddings. In the subsequent analysis, VAC uses a temporal sliding window and extracts vision-text multimodal knowledge from each frame and its corresponding textualized prediction via the learned VTE and a frozen BERT. The text embeddings help provide more representative cues but may also mislead due to prediction errors; VAC therefore randomly drops or replaces historical predictions to increase error tolerance and avoid collapsing onto the last few predictions. Qualitative and quantitative experiments demonstrate that the proposed method achieves superior performance compared to other state-of-the-art methods, with an average F1-score improvement of at least 7.66%.