{"title":"Hierarchical graph representation learning with multi-granularity features for anti-cancer drug response prediction.","authors":"Wei Peng, Jiangzhen Lin, Wei Dai, Ning Yu, Jianxin Wang","doi":"10.1109/JBHI.2024.3492806","DOIUrl":"10.1109/JBHI.2024.3492806","url":null,"abstract":"<p><p>Patients with the same type of cancer often respond differently to identical drug treatments due to unique genomic traits. Accurately predicting a patient's response to a drug is crucial in guiding treatment decisions, alleviating patient suffering, and improving cancer prognosis. Current computational methods utilize deep learning models trained on extensive drug screening data to predict anti-cancer drug responses based on features of cell lines and drugs. However, the interaction between cell lines and drugs is a complex biological process involving interactions across various levels, from internal cellular and drug structures to the external interactions among different molecules. To address this complexity, we propose a novel Hierarchical graph representation Learning with Multi-Granularity features (HLMG) algorithm for predicting anti-cancer drug responses. The HLMG algorithm combines features at two granularities: the overall gene expression and pathway substructures of cell lines, and the overall molecular fingerprints and substructures of drugs. Subsequently, it constructs a heterogeneous graph including cell lines, drugs, known cell line-drug responses, and the associations between similar cell lines and similar drugs. Through a graph convolutional network model, the HLMG learns the final cell line and drug representations by aggregating features of their multi-level neighbors in the heterogeneous graph. The multi-level neighbors consist of the node itself, directly related drugs/cell lines, and indirectly related similar drugs/cell lines.
Finally, a linear correlation coefficient decoder is employed to reconstruct the cell line-drug correlation matrix to predict anti-cancer drug responses. Our model was tested on the Genomics of Drug Sensitivity in Cancer (GDSC) and the Cancer Cell Line Encyclopedia (CCLE) databases. Results indicate that HLMG outperforms other state-of-the-art methods in accurately predicting anti-cancer drug responses.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
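The decoder step in the abstract above can be sketched minimally: once cell-line and drug embeddings are learned, every cell line-drug pair is scored by a linear (dot-product) decoder to reconstruct the response matrix. The toy 2-D embeddings below are purely illustrative, not HLMG's learned representations.

```python
def decode_responses(cell_embs, drug_embs):
    """Score every cell line-drug pair as the dot product of their embeddings."""
    scores = []
    for c in cell_embs:
        row = [sum(ci * di for ci, di in zip(c, d)) for d in drug_embs]
        scores.append(row)
    return scores

cells = [[0.2, 0.5], [0.9, 0.1]]   # toy embeddings for two cell lines
drugs = [[0.4, 0.4], [0.1, 0.8]]   # toy embeddings for two drugs
matrix = decode_responses(cells, drugs)  # 2x2 reconstructed response scores
```

In the paper itself these embeddings would come from the graph convolutional aggregation over multi-level neighbors; the sketch only shows the final scoring step.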
{"title":"IEEE Journal of Biomedical and Health Informatics Information for Authors","authors":"","doi":"10.1109/JBHI.2024.3472135","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3472135","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"28 11","pages":"C3-C3"},"PeriodicalIF":6.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10745965","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142595075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty Global Contrastive Learning Framework for Semi-Supervised Medical Image Segmentation.","authors":"Hengyang Liu, Pengcheng Ren, Yang Yuan, Chengyun Song, Fen Luo","doi":"10.1109/JBHI.2024.3492540","DOIUrl":"10.1109/JBHI.2024.3492540","url":null,"abstract":"<p><p>In semi-supervised medical image segmentation, the issue of fuzzy boundaries for segmented objects arises. With limited labeled data and the interaction of boundaries from different segmented objects, classifying segmentation boundaries becomes challenging. To mitigate this issue, we propose an uncertainty global contrastive learning (UGCL) framework. Specifically, we propose a patch filtering method and a classification entropy filtering method to provide reliable pseudo-labels for unlabeled data, while separating fuzzy boundaries and high-entropy pixels as unreliable points. Considering that unreliable regions contain rich complementary information, we introduce an uncertainty global contrastive learning method to distinguish these challenging unreliable regions, enhancing intra-class compactness and inter-class separability at the global data level. Within our optimization framework, we also integrate consistency regularization techniques and select unreliable points as targets for consistency. As demonstrated, the contrastive learning and consistency regularization applied to uncertain points enable us to glean valuable semantic information from unreliable data, which enhances segmentation accuracy.
We evaluate our method on two publicly available medical image datasets and compare it with other state-of-the-art semi-supervised medical image segmentation methods, and a series of experimental results show that our method has achieved substantial improvements.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
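The classification-entropy filtering idea described above can be illustrated with a small sketch: pixels whose predictive distribution has low entropy are kept as reliable pseudo-labels, and high-entropy pixels are set aside as unreliable. The threshold and toy probabilities are assumptions for illustration, not values from the UGCL paper.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy (nats) of one pixel's predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def split_by_entropy(pixel_probs, threshold):
    """Partition pixel indices into reliable (low entropy) and unreliable."""
    reliable, unreliable = [], []
    for idx, probs in enumerate(pixel_probs):
        (reliable if prediction_entropy(probs) < threshold else unreliable).append(idx)
    return reliable, unreliable

# Toy softmax outputs for three pixels; the middle one is maximally uncertain.
preds = [[0.95, 0.05], [0.5, 0.5], [0.85, 0.15]]
reliable, unreliable = split_by_entropy(preds, threshold=0.5)
```

The unreliable set would then feed the uncertainty-aware contrastive and consistency terms rather than being discarded.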
{"title":"Guest Editorial: Metaverse for Healthcare Trends, Challenges, and Solutions","authors":"Weizheng Wang;Zhuotao Lian;Kapal Dev;Shan Jiang","doi":"10.1109/JBHI.2024.3472388","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3472388","url":null,"abstract":"The concept of the metaverse, first introduced in science fiction, is rapidly becoming a technological reality with profound implications for various sectors, including healthcare. By merging virtual reality (VR), augmented reality (AR), artificial intelligence (AI), and advanced communication technologies, the metaverse promises to create immersive, interactive environments that can transform medical practice, education, and patient care [1].","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"28 11","pages":"6296-6297"},"PeriodicalIF":6.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10745913","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142595790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation and Conformity Evaluation of Multi-Class Counterfactual Explanations for Chronic Disease Prevention.","authors":"Marta Lenatti, Alberto Carlevaro, Aziz Guergachi, Karim Keshavjee, Maurizio Mongelli, Alessia Paglialonga","doi":"10.1109/JBHI.2024.3492730","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3492730","url":null,"abstract":"<p><p>Recent advances in Artificial Intelligence (AI) in healthcare are driving research into solutions that can provide personalized guidance. For these solutions to be used as clinical decision support tools, the results provided must be interpretable and consistent with medical knowledge. To this end, this study explores the use of explainable AI to characterize the risk of developing cardiovascular disease in patients diagnosed with chronic obstructive pulmonary disease. A dataset of 9613 records from patients diagnosed with chronic obstructive pulmonary disease was classified into three categories of cardiovascular risk (low, moderate, and high), as estimated by the Framingham Risk Score. Counterfactual explanations were generated with two different methods, MUlti Counterfactuals via Halton sampling (MUCH) and Diverse Counterfactual Explanation (DiCE). An error control mechanism is introduced in the preliminary classification phase to reduce classification errors and obtain meaningful and representative explanations. Furthermore, the concept of counterfactual conformity is introduced as a new way to validate single counterfactual explanations in terms of their conformity, based on proximity with respect to the factual observation and plausibility. The results indicate that explanations generated with MUCH are generally more plausible (lower implausibility) and more distinguishable (higher discriminative power) from the original class than those generated with DiCE, whereas DiCE shows better availability, proximity and sparsity. 
Furthermore, filtering the counterfactual explanations by eliminating the non-conformal ones results in an additional improvement in quality. The results of this study suggest that combining counterfactual explanations generation with conformity evaluation is worth further validation and expert assessment to enable future development of support tools that provide personalized recommendations for reducing individual risk by targeting specific subsets of biomarkers.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
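The conformity criterion sketched in this abstract (proximity to the factual observation plus plausibility) can be illustrated with a minimal check: a counterfactual is kept only if it stays within a distance budget of the factual point and every feature lies inside observed ranges. Feature names, ranges, and the distance budget below are invented for illustration and are not the paper's definitions.

```python
def is_conformal(factual, counterfactual, feature_ranges, max_distance):
    """Keep a counterfactual only if it is close to the factual point
    (proximity) and every feature lies within observed ranges (plausibility)."""
    dist = sum((f - c) ** 2 for f, c in zip(factual, counterfactual)) ** 0.5
    in_range = all(lo <= c <= hi for c, (lo, hi) in zip(counterfactual, feature_ranges))
    return dist <= max_distance and in_range

factual = [5.2, 130.0]                    # toy biomarker vector for one patient
ranges = [(3.0, 9.0), (90.0, 180.0)]      # plausible ranges observed in the data
good = is_conformal(factual, [5.0, 125.0], ranges, max_distance=10.0)
bad = is_conformal(factual, [2.0, 125.0], ranges, max_distance=10.0)  # out of range
```

Filtering a candidate set with such a predicate mirrors the step where non-conformal explanations are eliminated before being shown to clinicians.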
{"title":"Contactless Heart Sound Detection Using Advanced Signal Processing Exploiting Radar Signals.","authors":"Muhammad Farooq, Syed Aziz Shah, Dingchang Zheng, Ahmad Taha, Muhammad Imran, Qammer H Abbasi, Hasan Tahir Abbas","doi":"10.1109/JBHI.2024.3490992","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3490992","url":null,"abstract":"<p><p>Contactless vital signs detection has the potential to advance healthcare by offering precise and convenient patient monitoring. This groundbreaking approach not only streamlines the monitoring process, but also enables continuous, real-time assessment of vital signs, allowing early detection of anomalies and prompt intervention. This paper presents a novel framework for contactless vital signs detection using continuous-wave (CW) radar and advanced signal processing techniques. We achieved unprecedented precision in capturing 1,261 samples of radar-based heart sound waveforms compared to the ground truth ECG signal. Further, our heart sound method yields highly accurate heart pulse readings, surpassing previous benchmarks with a mean absolute percentage error (MAPE) of 0.0129 and a mean absolute error (MAE) below one (0.8712). In addition, we derived heart rates from the heart sound waveforms and compared them with conventional radar-derived heart rates and the ground truth ECG signal. Through this analysis, we identified regions where conventional radar-based methods exhibit limitations.
Our approach demonstrates minimal errors and superior accuracy across all heart rate states, which can potentially set new standards for noninvasive vital sign monitoring.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142582721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
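The two error metrics quoted in this abstract (MAPE and MAE) are standard and easy to reproduce; the sketch below shows how they are computed against a reference signal. The heart-rate values are made up for illustration, not the paper's measurements.

```python
def mape(truth, pred):
    """Mean absolute percentage error (as a fraction, e.g. 0.0129 = 1.29%)."""
    return sum(abs(t - p) / abs(t) for t, p in zip(truth, pred)) / len(truth)

def mae(truth, pred):
    """Mean absolute error in the units of the measurement (here, BPM)."""
    return sum(abs(t - p) for t, p in zip(truth, pred)) / len(truth)

hr_truth = [60.0, 75.0, 90.0]   # toy ground-truth heart rates (e.g. from ECG)
hr_pred = [61.0, 74.0, 90.0]    # toy radar-derived estimates
mape_val = mape(hr_truth, hr_pred)
mae_val = mae(hr_truth, hr_pred)
```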
{"title":"ADR-DQPU: A Novel ADR Signal Detection Using Deep Reinforcement and Positive-Unlabeled Learning.","authors":"Chun-Kit Chung, Wen-Yang Lin","doi":"10.1109/JBHI.2024.3492005","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3492005","url":null,"abstract":"<p><p>The medical community has grappled with the challenge of analysis and early detection of severe and unknown adverse drug reactions (ADRs) from Spontaneous Reporting Systems (SRSs) like the FDA Adverse Event Reporting System (FAERS), which often lack professional verification and have inherent uncertainties. These limitations have exacerbated the difficulty of training a robust machine-learning model for detecting ADR signals from SRSs. A solution is to use some authoritative knowledge bases of ADRs, such as SIDER and BioSNAP, which contain limited confirmed ADR relationships (positive), resulting in a relatively small training set compared to the substantial amount of unknown data (unlabeled). This paper proposes a novel ADR signal detection method, ADR-DQPU, to alleviate the issues above by integrating deep reinforcement Q-learning and positive-unlabeled learning. Upon validation using FAERS data, our model outperformed six traditional methods, exhibiting an overall accuracy improvement of 26.45%, an average accuracy improvement of 52.15%, a precision enhancement of 1.89%, a recall improvement of 18.57%, and an F1 score improvement of 10.95%. 
In comparison to two state-of-the-art machine learning methods, our approach demonstrated an overall accuracy improvement of 64.1%, an average accuracy improvement of 28.23%, a slight decrease of 1.91% in precision, a recall improvement of 55.56%, and an F1 score improvement of 45.53%.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142582672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
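The positive-unlabeled setup described above can be sketched as a data-preparation step: confirmed drug-ADR pairs from a knowledge base (e.g. SIDER/BioSNAP) act as positives, and every other pair is kept as unlabeled rather than assumed negative. The drug/reaction names and the low-weight heuristic below are illustrative assumptions, not ADR-DQPU's actual scheme.

```python
known_adrs = {("drugA", "rash"), ("drugB", "nausea")}   # toy "confirmed" pairs
all_pairs = [("drugA", "rash"), ("drugA", "headache"),
             ("drugB", "nausea"), ("drugC", "rash")]

def build_pu_training_set(pairs, positives, unlabeled_weight=0.1):
    """Label confirmed pairs 1; keep the rest as label 0 with a low sample
    weight -- a common PU-learning heuristic (treat unlabeled as weak negatives)."""
    return [(pair, 1, 1.0) if pair in positives else (pair, 0, unlabeled_weight)
            for pair in pairs]

train = build_pu_training_set(all_pairs, known_adrs)
```

The resulting (pair, label, weight) triples could then feed whatever learner sits on top; the paper couples this with deep reinforcement Q-learning.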
{"title":"Bias amplification to facilitate the systematic evaluation of bias mitigation methods.","authors":"Alexis Burgon, Yuhang Zhang, Nicholas Petrick, Berkman Sahiner, Kenny H Cha, Ravi K Samala","doi":"10.1109/JBHI.2024.3491946","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3491946","url":null,"abstract":"<p><p>The future of artificial intelligence (AI) safety is expected to include bias mitigation methods from development to application. The complexity and integration of these methods could grow in conjunction with advances in AI and human-AI interactions. Numerous methods are being proposed to mitigate bias, but without a structured way to compare their strengths and weaknesses. In this work, we present two approaches to systematically amplify subgroup performance bias. These approaches allow for the evaluation and comparison of the effectiveness of bias mitigation methods on AI models by varying the degrees of bias, and can be applied to any classification model. We used these approaches to compare four off-the-shelf bias mitigation methods. Both amplification approaches promote the development of learning shortcuts in which the model forms associations between patient attributes and AI output. We demonstrate these approaches in a case study, evaluating bias in the determination of COVID status from chest x-rays. The maximum achieved increase in performance bias, measured as a difference in predicted prevalence, was 72% and 32% for bias between subgroups related to patient sex and race, respectively. 
These changes in predicted prevalence were not accompanied by substantial changes in the differences in subgroup area under the receiver operating characteristic curves, indicating that the increased bias is due to the formation of learning shortcuts, not a difference in ability to distinguish positive and negative patients between subgroups.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142582679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
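The bias measure used above, the difference in predicted prevalence between subgroups, is simple to compute; the sketch below shows it on toy binary model outputs (the group compositions are invented for illustration).

```python
def predicted_prevalence(preds):
    """Fraction of cases the model labels positive in one subgroup."""
    return sum(preds) / len(preds)

def prevalence_gap(preds_a, preds_b):
    """Subgroup performance bias as the absolute difference in
    predicted-positive rates between two subgroups."""
    return abs(predicted_prevalence(preds_a) - predicted_prevalence(preds_b))

group_a = [1, 1, 1, 0]   # model outputs (1 = predicted positive) for subgroup A
group_b = [1, 0, 0, 0]   # model outputs for subgroup B
gap = prevalence_gap(group_a, group_b)
```

Tracking this gap while deliberately amplifying shortcut learning is what lets the authors grade mitigation methods against varying degrees of bias.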
{"title":"Advanced Camera-Based Scoliosis Screening via Deep Learning Detection and Fusion of Trunk, Limb, and Skeleton Features.","authors":"Ziyan Wang, Yi Zhou, Ninghui Xu, Yuqin Zhou, Heran Zhao, Zhiyong Chang, Zhigang Hu, Xiao Han, Yuke Song, Zuojian Zhou, Tianshu Wang, Tao Yang, Kongfa Hu","doi":"10.1109/JBHI.2024.3491855","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3491855","url":null,"abstract":"<p><p>Scoliosis significantly impacts quality of life, highlighting the need for effective early scoliosis screening (SS) and intervention. However, current SS methods often involve physical contact, undressing, or radiation exposure. This study introduces an innovative, non-invasive SS approach utilizing a monocular RGB camera that eliminates the need for undressing, sensor attachment, and radiation exposure. Our method employs Parameterized Human 3D Reconstruction (PH3DR) to reconstruct 3D human models, thereby effectively eliminating clothing obstructions. This reconstruction is seamlessly integrated with an ISANet segmentation network, enhanced by our proposed Multi-Scale Fusion Attention (MSFA) module, to facilitate the segmentation of distinct human trunk and limb features (HTLF) and capture body surface asymmetries related to scoliosis. Additionally, we propose a Swin Transformer-enhanced CMU-Pose to extract human skeleton features (HSF), identifying skeletal asymmetries crucial for SS. Finally, we develop a fusion model that integrates the HTLF and HSF, combining surface morphology and skeletal features to improve the precision of SS. The experiments demonstrated that PH3DR and MSFA significantly improved the segmentation and extraction of HTLF, whereas ST-based CMU-Pose substantially enhanced the extraction of HSF.
Our final model achieved an F1 score (0.895 ± 0.014) comparable to the best-performing baseline model with only 0.79% of the parameters and 1.64% of the FLOPs, and ran at 36 FPS, significantly faster than that baseline (10 FPS). Moreover, our model outperformed two spine surgeons, one less experienced and the other moderately experienced. As a patient-friendly, privacy-preserving, and easily deployable solution, this approach is particularly well-suited for early SS and routine monitoring.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142582676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
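The final fusion step described above, combining trunk/limb (HTLF) and skeleton (HSF) predictions, can be sketched with a simple late-fusion rule; a weighted average of the two branches' class probabilities is one common choice. The weighting and toy probabilities are assumptions for illustration, not the paper's learned fusion model.

```python
def late_fusion(trunk_probs, skeleton_probs, alpha=0.5):
    """Combine the trunk/limb-feature branch and the skeleton-feature branch
    by a per-class weighted average (alpha weights the trunk branch)."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(trunk_probs, skeleton_probs)]

trunk = [0.8, 0.2]      # toy class probabilities (scoliosis, no scoliosis)
skeleton = [0.6, 0.4]   # toy probabilities from the pose/skeleton branch
fused = late_fusion(trunk, skeleton, alpha=0.5)
```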
{"title":"A Novel Approach for Aphasia Evaluation based on ROI-based Features from Structural Magnetic Resonance Image.","authors":"Ying Dan, Aiqun Cai, Jiaxin Ma, Yuming Zhong, Seedahmed S Mahmoud, Qiang Fang","doi":"10.1109/JBHI.2024.3492072","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3492072","url":null,"abstract":"<p><p>Aphasia, affecting one-third of stroke survivors, impairs language comprehension and speech production, leading to challenges in daily interactions, social isolation, and economic losses. Assessing aphasia is crucial for effective rehabilitation and recovery in patients. However, the conventional behavior-based evaluation, reliant on speech pathologists, is susceptible to individual variability, resulting in high labor costs, time-consuming processes, and low robustness. To address these limitations, this study introduces a novel evaluation method based on medical image processing and artificial intelligence. Magnetic resonance imaging (MRI) provides exceptional spatial resolution while mitigating the impact of individual variability. Image processing techniques were employed to extract pathological features, specifically region-of-interest (ROI)-based features. Subsequently, the evaluation models were trained using ROI-based features, which initially identify the occurrence of aphasia and then categorize the type of aphasia, aiding clinicians in tailoring treatment to various therapeutic approaches and intensities. The evaluation models also predict the severity and generate scores for four types of language function: spontaneous speech, auditory comprehension, naming, and repetition. Both aphasia occurrence detection and aphasia type classification attain impressive accuracy rates of 100.00 ± 0.00% and 85.00 ± 13.23%, respectively. The severity prediction yields the lowest root mean square error (RMSE) of 17.03 ± 2.75, while the assessment of the four language functions achieves the best RMSE of 1.27 ± 0.82.
By leveraging the advantages of a medical imaging-based automated approach, the proposed aphasia evaluation method provides a comprehensive procedure and generates accurate results. Hence, it could assist aphasia rehabilitation and substantially reduce clinicians' workload.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142576019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
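Classifying aphasia type from ROI-based features, as described above, can be illustrated with one of the simplest possible classifiers: nearest centroid in ROI-feature space. The ROI values, class names, and classifier choice below are illustrative assumptions; the paper does not specify this particular model.

```python
def nearest_centroid(sample, centroids):
    """Assign the label whose class centroid is closest (squared Euclidean
    distance) to the sample's ROI feature vector."""
    best, best_dist = None, float("inf")
    for label, centroid in centroids.items():
        dist = sum((s - c) ** 2 for s, c in zip(sample, centroid))
        if dist < best_dist:
            best, best_dist = label, dist
    return best

# Toy per-class centroids of two ROI features (e.g. lesion load in two regions).
centroids = {"Broca": [0.2, 0.8], "Wernicke": [0.9, 0.3]}
label = nearest_centroid([0.25, 0.7], centroids)
```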