{"title":"Novel Digital Biomarkers for Fine Motor Skills Assessment in Psoriatic Arthritis: The DaktylAct Touch-Based Serious Game Approach","authors":"Eleni Vasileiou;Sofia B. Dias;Stelios Hadjidimitriou;Vasileios Charisis;Nikolaos Karagkiozidis;Stavros Malakoudis;Patty de Groot;Stelios Andreadis;Vassilis Tsekouras;Georgios Apostolidis;Anastasia Matonaki;Thanos G. Stavropoulos;Leontios J. Hadjileontiadis","doi":"10.1109/JBHI.2024.3487785","DOIUrl":"10.1109/JBHI.2024.3487785","url":null,"abstract":"Psoriatic Arthritis (PsA) is a chronic, inflammatory disease affecting joints, substantially impacting patients' quality of life, with European guidelines for managing PsA emphasizing the importance of assessing hand function. Here, we present a set of novel digital biomarkers (dBMs) derived from a touchscreen-based serious game approach, DaktylAct, intended as a proxy, gamified, objective assessment of hand impairment, with emphasis on fine motor skills, caused by PsA. This is achieved by its design, where the user controls a cannon to aim at and hit targets using two finger pinch-in/out and wrist rotation gestures. In-game metrics (targets hit and score) and statistical features (mean, standard deviation) of gameplay actions (duration of gestures, applied pressure, and wrist rotation angle) produced during gameplay serve as informative dBMs. DaktylAct was tested on a cohort comprising 16 clinically verified PsA patients and nine healthy controls (HC). Correlation analysis demonstrated a positive correlation between average pinch-in duration and disease activity (DA) and a negative correlation between standard deviation of applied pressure during wrist rotation and joint inflammation. Logistic regression models achieved 83% and 91% classification performance discriminating HC from PsA patients with low DA (LDA) and PsA patients with and without joint inflammation, respectively. Results presented here are promising and create a proof-of-concept, paving the way for further validation in larger cohorts.","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 1","pages":"128-141"},"PeriodicalIF":6.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142545170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention-based q-space Deep Learning Generalized for Accelerated Diffusion Magnetic Resonance Imaging.","authors":"Fangrong Zong, Zaimin Zhu, Jiayi Zhang, Xiaofeng Deng, Zhuangzhuang Li, Chuyang Ye, Yong Liu","doi":"10.1109/JBHI.2024.3487755","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3487755","url":null,"abstract":"<p><p>Diffusion magnetic resonance imaging (dMRI) is a non-invasive method for capturing the microanatomical information of tissues by measuring the diffusion weighted signals along multiple directions, which is widely used in the quantification of microstructures. Obtaining microscopic parameters requires dense sampling in the q space, leading to significant time consumption. The most popular approach to accelerating dMRI acquisition is to undersample the q-space data, along with applying deep learning methods to reconstruct quantitative diffusion parameters. However, the reliance on a predetermined q-space sampling strategy often constrains traditional deep learning-based reconstructions. The present study proposed a novel deep learning model, named attention-based q-space deep learning (aqDL), to implement the reconstruction with variable q-space sampling strategies. The aqDL maps dMRI data from different scanning strategies onto a common feature space by using a series of Transformer encoders. The latent features are employed to reconstruct dMRI parameters via a multilayer perceptron. The performance of the aqDL model was assessed utilizing the Human Connectome Project datasets at varying undersampling numbers. To validate its generalizability, the model was further tested on two additional independent datasets. Our results showed that aqDL consistently achieves the highest reconstruction accuracy at various undersampling numbers, regardless of whether variable or predetermined q-space scanning strategies are employed. These findings suggest that aqDL has the potential to be used on general clinical dMRI datasets.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142545154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Avatar-Based Picture Exchange Communication System Enhancing Joint Attention Training for Children With Autism.","authors":"Yongjun Ren, Runze Liu, Huinan Sang, Xiaofeng Yu","doi":"10.1109/JBHI.2024.3487589","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3487589","url":null,"abstract":"<p><p>Children with Autism Spectrum Disorder (ASD) often struggle with social communication and feel anxious in interactive situations. The Picture Exchange Communication System (PECS) is commonly used to enhance basic communication skills in children with ASD, but it falls short in reducing social anxiety during therapist interactions and in keeping children engaged. This paper proposes the use of virtual character technology alongside PECS training to address these issues. By integrating a virtual avatar, children's communication skills and ability to express needs can be gradually improved. This approach also reduces anxiety and enhances the interactivity and attractiveness of the training. After conducting a T-test, it was found that PECS assisted by a virtual avatar significantly improves children's focus on activities and enhances their behavioral responsiveness. To address the problem of poor accuracy of gaze estimation in unconstrained environments, this study further developed a visual feature-based gaze estimation algorithm, the three-channel gaze network (TCG-Net). It utilizes binocular images to refine the gaze direction and infer the primary focus from facial images. Our focus was on enhancing gaze tracking accuracy in natural environments, crucial for evaluating and improving Joint Attention (JA) in children during interactive processes.TCG-Net achieved an angular error of 4.0 on the MPIIGaze dataset, 5.0 on the EyeDiap dataset, and 6.8 on the RT-Gene dataset, confirming the effectiveness of our approach in improving gaze accuracy and the quality of social interactions.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142545156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Nuclei-Focused Strategy for Automated Histopathology Grading of Renal Cell Carcinoma.","authors":"Hyunjun Cho, Dongjin Shin, Kwang-Hyun Uhm, Sung-Jea Ko, Yosep Chong, Seung-Won Jung","doi":"10.1109/JBHI.2024.3487004","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3487004","url":null,"abstract":"<p><p>The rising incidence of kidney cancer underscores the need for precise and reproducible diagnostic methods. In particular, renal cell carcinoma (RCC), the most prevalent type of kidney cancer, requires accurate nuclear grading for better prognostic prediction. Recent advances in deep learning have facilitated end-to-end diagnostic methods using contextual features in histopathological images. However, most existing methods focus only on image-level features or lack an effective process for aggregating nuclei prediction results, limiting their diagnostic accuracy. In this paper, we introduce a novel framework, Nuclei feature Assisted Patch-level RCC grading (NuAP-RCC), that leverages nuclei-level features for enhanced patch-level RCC grading. Our approach employs a nuclei-level RCC grading network to extract grade-aware features, which serve as node features in a graph. These node features are aggregated using graph neural networks to capture the morphological characteristics and distributions of the nuclei. The aggregated features are then combined with global image-level features extracted by convolutional neural networks, resulting in a final feature for accurate RCC grading. In addition, we present a new dataset for patch-level RCC grading. Experimental results demonstrate the superior accuracy and generalizability of NuAP-RCC across datasets from different medical institutions, achieving a 6.15% improvement in accuracy over the second-best model on the USM-RCC dataset.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142521773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An On-Board Executable Multi-Feature Transfer-Enhanced Fusion Model for Three-Lead EEG Sensor-Assisted Depression Diagnosis","authors":"Fuze Tian;Haojie Zhang;Yang Tan;Lixian Zhu;Lin Shen;Kun Qian;Bin Hu;Björn W. Schuller;Yoshiharu Yamamoto","doi":"10.1109/JBHI.2024.3487012","DOIUrl":"10.1109/JBHI.2024.3487012","url":null,"abstract":"The development of affective computing and medical electronic technologies has led to the emergence of Artificial Intelligence (AI)-based methods for the early detection of depression. However, previous studies have often overlooked the necessity for the AI-assisted diagnosis system to be wearable and accessible in practical scenarios for depression recognition. In this work, we present an on-board executable multi-feature transfer-enhanced fusion model for our custom-designed wearable three-lead Electroencephalogram (EEG) sensor, based on EEG data collected from 73 depressed patients and 108 healthy controls. Experimental results show that the proposed model exhibits low-computational complexity (65.0 K parameters), promising Floating-Point Operations (FLOPs) performance (25.6 M), real-time processing (1.5 s/execution), and low power consumption (320.8 mW). Furthermore, it requires only 202.0 KB of Random Access Memory (RAM) and 279.6 KB of Read-Only Memory (ROM) when deployed on the EEG sensor. Despite its low computational and spatial complexity, the model achieves a notable classification accuracy of 95.2%, specificity of 94.0%, and sensitivity of 96.9% under independent test conditions. These results underscore the potential of deploying the model on the wearable three-lead EEG sensor for assisting in the diagnosis of depression.","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"29 1","pages":"152-165"},"PeriodicalIF":6.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142521774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention Transfer in Heterogeneous Networks Fusion for Drug Repositioning.","authors":"Xinguo Lu, Fengxu Sun, Jinxin Li, Jingjing Ruan","doi":"10.1109/JBHI.2024.3486730","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3486730","url":null,"abstract":"<p><p>Computational drug repositioning which accelerates the process of drug development is able to reduce the cost in terms of time and money dramatically which brings promising and broad perspectives for the treatment of complex diseases. Heterogeneous networks fusion has been proposed to improve the performance of drug repositioning. Due to the difference and the specificity including the network structure and the biological function among different biological networks, it poses serious challenge on how to represent drug features and construct drug-disease associations in drug repositioning. Therefore, we proposed a novel drug repositioning method (ATDR) that employed attention transfer across different networks constructed by the deeply represented features integrated from biological networks to implement the disease-drug association prediction. Specifically, we first implemented the drug feature characterization with the graph representation of random surfing for different biological networks, respectively. Then, the drug network of deep feature representation was constructed with the aggregated drug informative features acquired by the multi-modal deep autoencoder on heterogeneous networks. Subsequently, we accomplished the drug-disease association prediction by transferring attention from the drug network to the drug-disease interaction network. We performed comprehensive experiments on different datasets and the results illustrated the outperformance of ATDR compared with other baseline methods and the predicted potential drug-disease interactions could aid in the drug development for disease treatments.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142521775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gesture Recognition through Mechanomyogram Signals: An Adaptive Framework for Arm Posture Variability.","authors":"Panipat Wattanasiri, Samuel Wilson, Weiguang Huo, Ravi Vaidyanathan","doi":"10.1109/JBHI.2024.3483428","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3483428","url":null,"abstract":"<p><p>In hand gesture recognition, classifying gestures across multiple arm postures is challenging due to the dynamic nature of muscle fibers and the need to capture muscle activity through electrical connections with the skin. This paper presents a gesture recognition architecture addressing the arm posture challenges using an unsupervised domain adaptation technique and a wearable mechanomyogram (MMG) device that does not require electrical contact with the skin. To deal with the transient characteristics of muscle activities caused by changing arm posture, Continuous Wavelet Transform (CWT) combined with Domain-Adversarial Convolutional Neural Networks (DACNN) were used to extract MMG features and classify hand gestures. DACNN was compared with supervised trained classifiers and shown to achieve consistent improvement in classification accuracies over multiple arm postures. With less than 5 minutes of setup time to record 20 examples per gesture in each arm posture, the developed method achieved an average prediction accuracy of 87.43% for classifying 5 hand gestures in the same arm posture and 64.29% across 10 different arm postures. When further expanding the MMG segmentation window from 200 ms to 600 ms to extract greater discriminatory information at the expense of longer response time, the intraposture and inter-posture accuracies increased to 92.32% and 71.75%. The findings demonstrate the capability of the proposed method to improve generalization throughout dynamic changes caused by arm postures during non-laboratory usages and the potential of MMG to be an alternative sensor with comparable performance to the widely used electromyogram (EMG) gesture recognition systems.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142521776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Head-Mounted Displays in Context-Aware Systems for Open Surgery: A State-of-the-Art Review.","authors":"Mingxiao Tu, Hoijoon Jung, Jinman Kim, Andre Kyme","doi":"10.1109/JBHI.2024.3485023","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3485023","url":null,"abstract":"<p><p>Surgical context-aware systems (SCAS), which leverage real-time data and analysis from the operating room to inform surgical activities, can be enhanced through the integration of head-mounted displays (HMDs). Rather than user-agnostic data derived from conventional, and often static, external sensors, HMD-based SCAS relies on dynamic user-centric sensing of the surgical context. The analyzed context-aware information is then augmented directly into a user's field of view via augmented reality (AR) to directly improve their task and decision-making capability. This stateof-the-art review complements previous reviews by exploring the advancement of HMD-based SCAS, including their development and impact on enhancing situational awareness and surgical outcomes in the operating room. The survey demonstrates that this technology can mitigate risks associated with gaps in surgical expertise, increase procedural efficiency, and improve patient outcomes. We also highlight key limitations still to be addressed by the research community, including improving prediction accuracy, robustly handling data heterogeneity, and reducing system latency.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142521777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"scSwinTNet: A Cell Type Annotation Method for Large-Scale Single-Cell RNA-Seq Data Based on Shifted Window Attention.","authors":"Huanhuan Dai, Xiangyu Meng, Zhiyi Pan, Qing Yang, Haonan Song, Yuan Gao, Xun Wang","doi":"10.1109/JBHI.2024.3487174","DOIUrl":"10.1109/JBHI.2024.3487174","url":null,"abstract":"<p><p>The annotation of cell types based on single-cell RNA sequencing (scRNA-seq) data is a critical downstream task in single-cell analysis, with significant implications for a deeper understanding of biological processes. Most analytical methods cluster cells by unsupervised clustering, which requires manual annotation for cell type determination. This procedure is time-overwhelming and non-repeatable. To accommodate the exponential growth of sequencing cells, reduce the impact of data bias, and integrate large-scale datasets for further improvement of type annotation accuracy, we proposed scSwinTNet. It is a pre-trained tool for annotating cell types in scRNA-seq data, which uses self-attention based on shifted windows and enables intelligent information extraction from gene data. We demonstrated the effectiveness and robustness of scSwinTNet by using 399 760 cells from human and mouse tissues. To the best of our knowledge, scSwinTNet is the first model to annotate cell types in scRNA-seq data using a pre-trained shifted window attention-based model. It does not require a priori knowledge and accurately annotates cell types without manual annotation.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142521789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Belief-Rule-Based System with Self-organizing and Multi-temporal Modeling for Sensor-based Human Activity Recognition.","authors":"Long-Hao Yang, Fei-Fei Ye, Chris Nugent, Jun Liu, Ying-Ming Wang","doi":"10.1109/JBHI.2024.3485871","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3485871","url":null,"abstract":"<p><p>Smart environment is an efficient and cost- effective way to afford intelligent supports for the elderly people. Human activity recognition (HAR) is a crucial aspect of the research field of smart environments, and it has attracted widespread attention lately. The goal of this study is to develop an effective sensor-based HAR model based on the belief-rule-based system (BRBS), which is one of representative rule-based expert systems. Specially, a new belief rule base (BRB) modeling approach is proposed by taking into account the self- organizing rule generation method and the multi-temporal rule representation scheme, in order to address the problem of combination explosion that existed in the traditional BRB modelling procedure and the time correlation found in continuous sensor data in chronological order. The new BRB modeling approach is so called self-organizing and multi-temporal BRB (SOMT-BRB) modeling procedure. A case study is further deducted to validate the effectiveness of the SOMT-BRB modeling procedure. By comparing with some conventional BRBSs and classical activity recognition models, the results show a significant improvement of the BRBS in terms of the number of belief rules, modelling efficiency, and activity recognition accuracy.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142499409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}