IEEE Journal of Biomedical and Health Informatics: Latest Articles

Hypercomplex Graph Neural Network: Towards Deep Intersection of Multi-modal Brain Networks.
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-11-01 DOI: 10.1109/JBHI.2024.3490664
Yanwu Yang, Chenfei Ye, Guoqing Cai, Kunru Song, Jintao Zhang, Yang Xiang, Ting Ma
Abstract: Multi-modal neuroimaging studies have provided insights into the heteromodal relationships between brain network organization and behavioral phenotypes. Integrating data from various modalities facilitates the characterization of the interplay among anatomical, functional, and physiological brain alterations or developments. Graph Neural Networks (GNNs) have recently become popular for analyzing and fusing multi-modal, graph-structured brain networks. However, effectively learning complementary representations from other modalities remains a significant challenge due to the sophisticated and heterogeneous inter-modal dependencies. Furthermore, most existing studies focus on specific modalities (e.g., only fMRI and DTI), which limits their scalability to other types of brain networks. To overcome these limitations, we propose a HyperComplex Graph Neural Network (HC-GNN) that models multi-modal networks as hypercomplex tensor graphs. In our approach, HC-GNN is conceptualized as a dynamic spatial graph, where the attentively learned inter-modal associations are represented as the adjacency matrix. HC-GNN leverages hypercomplex operations for inter-modal intersections through cross-embedding and cross-aggregation, enriching the deep coupling of multi-modal representations. We conduct a statistical analysis on the saliency maps to associate disease biomarkers. Extensive experiments on three datasets demonstrate the superior classification performance of our method and its strong scalability to various types of modalities. Our work presents a powerful paradigm for the study of multi-modal brain networks.
Citations: 0
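The paper does not publish its implementation here, but the core "hypercomplex" idea can be illustrated with the quaternion (4-D hypercomplex) product: multiplying by a single hypercomplex weight mixes every component channel into every output channel, which is what makes cross-modal coupling cheap. In this minimal numpy sketch, packing four modalities into one quaternion per node and sharing one weight are illustrative assumptions, not the authors' design.

```python
import numpy as np

def hamilton_product(p, q):
    """Quaternion (4-D hypercomplex) product over the last axis (r, i, j, k)."""
    r1, i1, j1, k1 = np.moveaxis(p, -1, 0)
    r2, i2, j2, k2 = np.moveaxis(q, -1, 0)
    return np.stack([
        r1 * r2 - i1 * i2 - j1 * j2 - k1 * k2,
        r1 * i2 + i1 * r2 + j1 * k2 - k1 * j2,
        r1 * j2 - i1 * k2 + j1 * r2 + k1 * i2,
        r1 * k2 + i1 * j2 - j1 * i2 + k1 * r2,
    ], axis=-1)

# Cross-embedding sketch: each brain-network node carries one feature per
# modality (four modalities packed as one quaternion); one hypercomplex
# weight mixes all four modality channels into every output channel.
rng = np.random.default_rng(0)
node_feats = rng.normal(size=(10, 4))   # 10 nodes x 4 modality channels
weight = rng.normal(size=(4,))          # a single shared hypercomplex weight
mixed = hamilton_product(node_feats, np.broadcast_to(weight, node_feats.shape))
```

Note how a real-valued linear layer with the same parameter count would keep the four channels independent, whereas the Hamilton product entangles them by construction.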
EEG Temporal-Spatial Feature Learning for Automated Selection of Stimulus Parameters in Electroconvulsive Therapy.
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-31 DOI: 10.1109/JBHI.2024.3489221
Fan Wang, Dan Chen, Shenhong Weng, Tengfei Gao, Yiping Zuo, Yuntao Zheng
Abstract: The risk of adverse effects in Electroconvulsive Therapy (ECT), such as cognitive impairment, can be high if an excessive stimulus is applied to induce the necessary generalized seizure (GS); conversely, an inadequate stimulus results in failure. Recent efforts to automate this task can facilitate statistical analyses of individual parameters or qualitative predictions, but this automation still lags significantly behind the requirements of clinical practice. This study addresses the issue by predicting the probability of GS induction under the joint restriction of a patient's electroencephalogram (EEG) and the stimulus parameters, sustained by a two-stage learning model (ECTnet): 1) Temporal-Spatial Feature Learning: channel-wise convolution via multiple convolution kernels first learns deep features of the EEG, followed by a "ConvLSTM" constructing temporal-spatial features aided by enforced convolution operations at the LSTM gates; 2) GS Prediction: the probability of seizure induction is predicted from the EEG features fused with the stimulus parameters, through which the optimal parameter setting(s) may be obtained by minimizing the stimulus charge while keeping the probability above a threshold. Experiments on EEG data from 96 subjects with mental disorders examine the performance and design of ECTnet. These experiments indicate that ECTnet can effectively automate the selection of optimal stimulus parameters: 1) an AUC of 0.746, an F1-score of 0.90, a precision of 89%, and a recall of 93% have been achieved in the prediction of seizure induction, outperforming the state-of-the-art counterpart; and 2) inclusion of parameter features increases the F1-score by 0.054.
Citations: 0
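The first stage's channel-wise convolution can be sketched in a few lines: each EEG channel gets its own 1-D kernel applied along time, so spatial (cross-channel) mixing is deferred to later layers. This is a hedged numpy illustration of the general technique, assuming arbitrary kernel length and 'valid' padding; it is not ECTnet's actual layer configuration.

```python
import numpy as np

def channelwise_conv(eeg, kernels):
    """Apply one 1-D kernel per EEG channel along the time axis.

    eeg: (channels, time); kernels: (channels, k); output: (channels, time-k+1)."""
    return np.stack([np.convolve(sig, k, mode="valid")
                     for sig, k in zip(eeg, kernels)])

rng = np.random.default_rng(1)
eeg = rng.normal(size=(8, 256))       # 8 channels, 256 time samples
kernels = rng.normal(size=(8, 5))     # one length-5 kernel per channel
feats = channelwise_conv(eeg, kernels)
```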
HepNet: Deep Neural Network for Classification of Early-Stage Hepatic Steatosis Using Microwave Signals
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-31 DOI: 10.1109/JBHI.2024.3489626
Sazid Hasan;Aida Brankovic;Md Abdul Awal;Sasan Ahdi Rezaeieh;Shelley E. Keating;Amin M. Abbosh;Ali Zamani
Abstract: Hepatic steatosis, a key factor in chronic liver diseases, is difficult to diagnose early. This study introduces a classifier for hepatic steatosis using microwave technology, validated through clinical trials. Our method combines microwave signals and deep learning to improve detection reliability. It includes a pipeline with simulation data, a new deep-learning model called HepNet, and transfer learning. The simulation data, created with 3D electromagnetic tools, is used for training and evaluating the model. HepNet uses skip connections in convolutional layers and two fully connected layers for better feature extraction and generalization. Calibration and uncertainty assessments ensure the model's robustness. Our simulation achieved an F1-score of 0.91 and a confidence level of 0.97 for classifications with entropy ≤0.1, outperforming traditional models like LeNet (0.81) and ResNet (0.87). We also use transfer learning to adapt HepNet to clinical data with limited patient samples. Using ¹H-MRS as the reference standard for two microwave liver scanners, HepNet achieved high F1-scores of 0.95 and 0.88 for 94 and 158 patient samples, respectively, showing its clinical potential. (Vol. 29, no. 1, pp. 142-151.)
Citations: 0
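A skip (residual) connection, as used in HepNet's convolutional layers, simply adds a block's input back onto its output so gradients and low-level features bypass the convolutions. The sketch below is a generic 1-D numpy illustration of that pattern, with made-up kernels; HepNet's real layer widths and activations are not specified here.

```python
import numpy as np

def conv1d_same(x, k):
    # 'same'-length 1-D convolution (np.convolve flips the kernel; fine for a sketch)
    return np.convolve(x, k, mode="same")

def skip_block(x, k1, k2):
    """Two convolution stages with a residual (skip) connection and ReLU."""
    h = np.maximum(conv1d_same(x, k1), 0.0)
    h = conv1d_same(h, k2)
    return np.maximum(h + x, 0.0)   # skip connection: add the block input back

rng = np.random.default_rng(7)
signal = rng.normal(size=64)               # a stand-in for a microwave signal
out = skip_block(signal, rng.normal(size=3), rng.normal(size=3))
```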
LF-SynthSeg: Label-Free Brain Tissue-Assisted Tumor Synthesis and Segmentation.
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-31 DOI: 10.1109/JBHI.2024.3489721
Pengxiao Xu, Junyan Lyu, Li Lin, Pujin Cheng, Xiaoying Tang
Abstract: Unsupervised brain tumor segmentation is pivotal in disease diagnosis, surgical planning, and treatment response monitoring, with the distinct advantage of obviating the need for labeled data. Traditional methodologies in this domain, however, often fall short of fully capitalizing on the extensive prior knowledge of brain tissue, typically approaching the task merely as an anomaly detection challenge. In our research, we present an innovative strategy that effectively integrates brain tissue prior knowledge into both the synthesis and segmentation of brain tumors from T2-weighted Magnetic Resonance Imaging scans. Central to our method is the tumor synthesis mechanism, employing randomly generated ellipsoids in conjunction with the intensity profiles of brain tissues. This methodology not only fosters a significant degree of variation in the tumor presentations within the synthesized images but also facilitates the creation of an essentially unlimited pool of abnormal T2-weighted images. These synthetic images closely replicate the characteristics of real tumor-bearing scans. Our training protocol extends beyond mere tumor segmentation; it also encompasses the segmentation of brain tissues, thereby directing the network's attention to the boundary relationship between brain tumor and brain tissue, thus improving the robustness of our method. We evaluate our approach across five widely recognized public datasets (BRATS 2019, BRATS 2020, BRATS 2021, PED and SSA), and the results show that our method outperforms state-of-the-art unsupervised tumor segmentation methods by large margins. Moreover, the proposed method achieves more than 92% of the fully supervised performance on the same testing datasets.
Citations: 0
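The ellipsoid-based synthesis step is concrete enough to sketch: draw a random ellipsoid mask in the volume and fill it with intensities drawn around a tumor-like level. This numpy toy, with an assumed Gaussian intensity profile and hand-picked radii, shows the mechanism only; the paper's actual sampling of shapes and tissue-matched intensities is richer.

```python
import numpy as np

def synthesize_tumor(volume, center, radii, tumor_mean, rng):
    """Paint a random ellipsoid 'tumor' into a scan with a target intensity.

    volume: 3-D array; center/radii: per-axis ellipsoid parameters (voxels)."""
    grids = np.indices(volume.shape)
    dist = sum(((g - c) / r) ** 2 for g, c, r in zip(grids, center, radii))
    mask = dist <= 1.0                       # inside-ellipsoid test
    out = volume.copy()
    out[mask] = rng.normal(tumor_mean, 0.05, mask.sum())
    return out, mask

rng = np.random.default_rng(2)
scan = rng.normal(0.3, 0.02, size=(32, 32, 32))   # fake T2-weighted tissue
lesioned, mask = synthesize_tumor(scan, center=(16, 16, 16),
                                  radii=(6, 4, 5), tumor_mean=0.9, rng=rng)
```

Pairing each synthetic volume with its own `mask` is what makes label-free training possible: the segmentation target comes for free from the generator.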
Forecasting Epidemic Spread with Recurrent Graph Gate Fusion Transformers.
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-30 DOI: 10.1109/JBHI.2024.3488274
Minkyoung Kim, Jae Heon Kim, Beakcheol Jang
Abstract: Predicting the unprecedented, nonlinear nature of COVID-19 presents a significant public health challenge. Recent advances in deep learning, such as Graph Neural Networks, Recurrent Neural Networks (RNNs), and Transformers, have enhanced predictions by modeling regional interactions, managing autoregressive time series, and identifying long-term dependencies. However, prior works often feature shallow integration of these models, leading to simplistic graph embeddings and inadequate analysis across different graph types. Additionally, excessive reliance on historical COVID-19 data limits the potential of utilizing time-lagged data, such as intervention policy information. To address these challenges, we introduce ReGraFT, a novel Sequence-to-Sequence model designed for robust long-term forecasting of COVID-19. ReGraFT integrates Multigraph-Gated Recurrent Units (MGRUs) with adaptive graphs, leveraging data from individual states, including infection rates, policy changes, and interstate travel. First, ReGraFT employs adaptive MGRU cells within an RNN framework to capture inter-regional dependencies, dynamically modeling complex transmission dynamics. Second, the model features a Self-Normalizing Priming layer using SELUs to enhance stability and accuracy across short, medium, and long-term forecasts. Lastly, ReGraFT systematically compares and integrates various graph types derived from fully connected layers, pooling, and attention-based mechanisms to provide a nuanced representation of inter-regional relationships. By incorporating lagged COVID-19 policy data, ReGraFT refines forecasts, demonstrating RMSE reductions of 2.39-35.92% compared to state-of-the-art models. This work provides accurate long-term predictions, aiding in better public health decisions. Our code is available at https://github.com/mfriendly/ReGraFT.
Citations: 0
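The general shape of a graph-gated recurrent cell, i.e. a GRU whose input is first mixed with graph neighbours through an adjacency matrix, can be sketched as below. This is a generic illustration under simplifying assumptions (one uniform adjacency, concatenated-gate weights), not ReGraFT's actual MGRU cell, which fuses multiple adaptive graphs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_gru_step(h, x, A, Wz, Wr, Wh):
    """One recurrent step: mix each node's input with its graph neighbours
    (A @ x), then apply standard GRU gating.

    h, x: (nodes, dim); A: (nodes, nodes) row-normalized; W*: (2*dim, dim)."""
    m = A @ x                                             # graph message passing
    z = sigmoid(np.concatenate([m, h], axis=1) @ Wz)      # update gate
    r = sigmoid(np.concatenate([m, h], axis=1) @ Wr)      # reset gate
    h_tilde = np.tanh(np.concatenate([m, r * h], axis=1) @ Wh)
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(3)
n, d = 5, 8                                  # e.g. 5 regions, 8 features each
A = np.full((n, n), 1.0 / n)                 # uniform adjacency (sketch only)
h = np.zeros((n, d))
Ws = [rng.normal(0, 0.1, (2 * d, d)) for _ in range(3)]
for _ in range(4):                           # unroll 4 time steps
    h = graph_gru_step(h, rng.normal(size=(n, d)), A, *Ws)
```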
Fine-Grained Temporal Site Monitoring in EGD Streams Via Visual Time-Aware Embedding and Vision-Text Asymmetric Coworking.
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-30 DOI: 10.1109/JBHI.2024.3488514
Fang Peng, Hongkuan Shi, Shiquan He, Qiang Hu, Ting Li, Fan Huang, Xinxia Feng, Mei Liu, Jiazhi Liao, Qiang Li, Zhiwei Wang
Abstract: Esophagogastroduodenoscopy (EGD) requires complete inspection of numerous upper gastrointestinal (UGI) sites for precise cancer screening. Automated temporal site monitoring for EGD assistance is thus in high demand, yet often fails if the existing methods of online action detection are applied directly. The key challenges are two-fold: 1) global camera motion dominates, invalidating the temporal patterns derived from object optical flows, and 2) the UGI sites are fine-grained, yielding highly homogenized appearances. In this paper, we propose an EGD-customized model, powered by two novel designs, i.e., Visual Time-aware Embedding plus Vision-text Asymmetric Coworking (VTE+VAC), for real-time, accurate, fine-grained UGI site monitoring. Concretely, VTE learns visual embeddings by differentiating frames via classification losses, and meanwhile by reordering sampled time-agnostic frames to be temporally coherent via a ranking loss. This joint objective encourages VTE to capture sequential relations without resorting to inapplicable object optical flows, and thus to provide time-aware frame-wise embeddings. In the subsequent analysis, VAC uses a temporal sliding window and extracts vision-text multimodal knowledge from each frame and its corresponding textualized prediction via the learned VTE and a frozen BERT. The text embeddings help provide more representative cues, but may also cause misdirection due to prediction errors. Thus, VAC randomly drops or replaces historical predictions to increase error tolerance and avoid collapsing onto the last few predictions. Qualitative and quantitative experiments demonstrate that the proposed method achieves superior performance compared to other state-of-the-art methods, with an average F1-score improvement of at least 7.66%.
Citations: 0
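A ranking loss for temporal coherence can be written as a simple hinge on consecutive scores: frames presented in true temporal order should score monotonically, with a margin. This is a minimal sketch of that loss family under an assumed pairwise-consecutive formulation; the paper's exact ranking objective may differ.

```python
import numpy as np

def temporal_ranking_loss(scores, margin=1.0):
    """Hinge loss pushing each frame's score above its predecessor's by
    at least `margin`, encouraging temporally coherent embeddings.

    scores: 1-D array of model scores for frames in true temporal order."""
    diffs = np.diff(scores)                      # consecutive score gaps
    return np.maximum(0.0, margin - diffs).mean()
```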
Innovative Dual-Decoupling CNN with Layer-wise Temporal-Spatial Attention for Sensor-Based Human Activity Recognition.
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-30 DOI: 10.1109/JBHI.2024.3488528
Qi Teng, Wei Li, Guangwei Hu, Yuanyuan Shu, Yun Liu
Abstract: Human Activity Recognition (HAR) is essential for monitoring and analyzing human behavior, particularly in health applications such as fall detection and chronic disease management. Traditional methods, even those incorporating attention mechanisms, often oversimplify the complex temporal and spatial dependencies in sensor data by processing features uniformly, leading to inadequate modeling of high-dimensional interactions. To address these limitations, we propose a novel framework: the Temporal-Spatial Feature Decoupling Unit with Layer-wise Training Convolutional Neural Network (CNN-TSFDU-LW). Our model enhances HAR accuracy by decoupling temporal and spatial dependencies, facilitating more precise feature extraction and reducing computational overhead. The TSFDU mechanism enables parallel processing of temporal and spatial features, thereby enriching the learned representations. Furthermore, layer-wise training with a local error function allows for independent updates of each CNN layer, reducing the number of parameters and improving memory efficiency without compromising performance. Experiments on four benchmark datasets (UCI-HAR, PAMAP2, UNIMIB-SHAR, and USC-HAD) demonstrate accuracy improvements ranging from 0.9% to 4.19% over state-of-the-art methods while simultaneously reducing computational complexity. Specifically, our framework achieves accuracy rates of 97.90% on UCI-HAR, 94.34% on PAMAP2, 78.90% on UNIMIB-SHAR, and 94.71% on USC-HAD, underscoring its effectiveness in complex HAR tasks. In conclusion, the CNN-TSFDU-LW framework represents a significant advancement in sensor-based HAR, delivering both improved accuracy and computational efficiency, with promising potential for enhancing health monitoring applications.
Citations: 0
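Decoupling temporal from spatial processing is commonly realized as a per-channel convolution along time followed by a pointwise (1x1) mix across sensor channels. The numpy sketch below illustrates that generic factorization on a fake IMU window; the window size, kernel, and channel counts are illustrative, not the paper's settings.

```python
import numpy as np

def temporal_conv(x, kernel):
    """Per-channel 1-D convolution along time. x: (time, channels)."""
    return np.stack([np.convolve(x[:, c], kernel, mode="valid")
                     for c in range(x.shape[1])], axis=1)

def spatial_mix(x, W):
    """Pointwise (1x1) mixing across sensor channels. W: (channels, out)."""
    return x @ W

rng = np.random.default_rng(4)
window = rng.normal(size=(128, 9))               # 128 samples x 9 IMU channels
t_feats = temporal_conv(window, np.ones(5) / 5)  # length-5 smoothing kernel
out = spatial_mix(t_feats, rng.normal(size=(9, 16)))
```

The factorized form costs roughly k + c·o multiplies per output instead of k·c·o for a joint kernel, which is the computational saving the abstract alludes to.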
Multi-omics Graph Knowledge Representation for Pneumonia Prognostic Prediction.
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-30 DOI: 10.1109/JBHI.2024.3488735
Wenyu Xing, Miao Li, Yiwen Liu, Xin Liu, Yifang Li, Yanping Yang, Jing Bi, Jiangang Chen, Dongni Hou, Yuanlin Song, Dean Ta
Abstract: Early prognostic prediction is crucial for determining appropriate clinical interventions. Previous single-omics models had limitations, such as high contingency and overlooking complex physical conditions. In this paper, we introduce multi-omics graph knowledge representation to predict in-hospital outcomes for pneumonia patients. This method utilizes CT imaging and three types of non-imaging omics information, and explores a knowledge graph for modeling multi-omics relations to enhance the overall information representation. For imaging omics, a multichannel pyramidal recursive MLP and Longformer-based 3D deep learning module was developed to extract depth features in the lung window, while radiomics features were simultaneously extracted in both lung and mediastinal windows. Non-imaging omics involved the adoption of laboratory, microbial, and clinical indices to complement the patient's physical condition. Following feature screening, a similarity fusion network and graph convolutional network (GCN) were employed to determine omics similarity and provide prognostic prediction. The results of comparative experiments and generalization validation demonstrate that the proposed multi-omics GCN-based prediction model has good robustness and outperforms previous single-type omics, classical machine learning, and previous deep learning methods. Thus, the proposed multi-omics graph knowledge representation model enhances early prognostic prediction performance in pneumonia, facilitating a comprehensive assessment of disease severity and timely intervention for high-risk patients.
Citations: 0
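A GCN layer over a patient-similarity graph is the standard building block here: add self-loops, normalize the adjacency, propagate fused features, apply a nonlinearity. The sketch below is the textbook form with row normalization assumed for simplicity (the common Kipf-Welling variant uses symmetric normalization); it is not the paper's exact architecture.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: self-loops, row normalization, ReLU.

    A: (n, n) patient-similarity matrix; X: (n, f) fused omics features;
    W: (f, out) learned weights."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)    # row-normalize
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(5)
n, f = 6, 12                                   # 6 patients, 12 fused features
sim = np.abs(rng.normal(size=(n, n)))
sim = (sim + sim.T) / 2                        # symmetric similarity (sketch)
H = gcn_layer(sim, rng.normal(size=(n, f)), rng.normal(size=(f, 4)))
```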
Automatic Identification of Facial Tics Using Selfie-Video
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-30 DOI: 10.1109/JBHI.2024.3488285
Yocheved Loewenstern;Noa Benaroya-Milshtein;Katya Belelovsky;Izhar Bar-Gad
Abstract: The intrinsic nature of tic disorders, characterized by symptom variability and fluctuation, poses challenges in clinical evaluations. Currently, tic assessments predominantly rely on subjective questionnaires administered periodically during clinical visits, thus lacking continuous quantitative evaluation. This study aims to establish an automatic objective measure of tic expression in natural behavioral settings. A custom-developed smartphone application was used to record selfie-videos of children and adolescents with tic disorders exhibiting facial motor tics. Facial landmarks were utilized to extract tic-related features from video segments labeled as either "tic" or "non-tic". These features were then passed through a tandem of custom deep neural networks to learn spatial and temporal properties for tic classification of these segments according to their labels. The model achieved a mean accuracy of 95% when trained on data across all subjects, and consistently exceeded 90% accuracy in leave-one-session-out and leave-one-subject-out cross-validation training schemes. This automatic tic identification measure may provide a valuable tool for clinicians in facilitating diagnosis, patient follow-up, and treatment efficacy evaluation. Combining this measure with standard smartphone technology has the potential to revolutionize large-scale clinical studies, thereby expediting the development and testing of novel interventions. (Vol. 29, no. 1, pp. 409-419.)
Citations: 0
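A basic landmark-derived motion feature, the frame-to-frame displacement magnitude of each facial landmark, can be computed in two numpy operations. This is a generic illustration of the feature-extraction step, assuming (frames, points, 2) pixel coordinates; the paper's actual tic-related features are more elaborate.

```python
import numpy as np

def landmark_motion_features(landmarks):
    """Frame-to-frame displacement magnitude per facial landmark.

    landmarks: (frames, points, 2) pixel coordinates;
    returns: (frames-1, points) displacement magnitudes."""
    disp = np.diff(landmarks, axis=0)        # (frames-1, points, 2) deltas
    return np.linalg.norm(disp, axis=-1)     # Euclidean magnitude per point
```

A sudden, localized spike in one landmark's displacement track is exactly the kind of signature a downstream temporal classifier can pick up as a candidate tic.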
A Multimodal Consistency-Based Self-Supervised Contrastive Learning Framework for Automated Sleep Staging in Patients with Disorders of Consciousness.
IF 6.7 · CAS Zone 2 · Medicine
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-29 DOI: 10.1109/JBHI.2024.3487657
Jahui Pan, Yangzuyi Yu, Man Li, Wanxin Wei, Shuyu Chen, Heyi Zheng, Yanbin He, Yuanqing Li
Abstract: Sleep is a fundamental human activity, and automated sleep staging holds considerable investigational potential. Despite numerous deep learning methods proposed for sleep staging that exhibit notable performance, several challenges remain unresolved, including inadequate representation and generalization capabilities, limitations in multimodal feature extraction, the scarcity of labeled data, and restricted practical application for patients with disorders of consciousness (DOC). This paper proposes MultiConsSleepNet, a multimodal consistency-based sleep staging network. The network comprises a unimodal feature extractor and a multimodal consistency feature extractor, aiming to explore universal representations of electroencephalograms (EEGs) and electrooculograms (EOGs) and extract the consistency of intra- and intermodal features. Additionally, self-supervised contrastive learning strategies are designed for unimodal and multimodal consistency learning to address the clinical reality that high-quality labeled data are difficult to obtain while unlabeled data are abundant. This effectively alleviates the model's dependence on labeled data and improves its generalizability for migration to DOC patients. Experimental results on three publicly available datasets demonstrate that MultiConsSleepNet achieves state-of-the-art sleep staging performance with limited labeled data and effectively utilizes unlabeled data, enhancing its practical applicability. Furthermore, the proposed model yields promising results on a self-collected DOC dataset, offering a novel perspective for sleep staging research in patients with DOC.
Citations: 0
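Multimodal-consistency contrastive learning is typically built on an NT-Xent-style loss: for each sleep epoch, its EEG and EOG embeddings form a positive pair and all other embeddings in the batch act as negatives. The numpy sketch below shows that generic loss under an assumed temperature and batch layout; it is not the paper's exact objective.

```python
import numpy as np

def nt_xent(z_eeg, z_eog, tau=0.5):
    """NT-Xent contrastive loss: epoch i's EEG and EOG embeddings are a
    positive pair; every other embedding in the batch is a negative.

    z_eeg, z_eog: (batch, dim) embeddings from the two modalities."""
    z = np.concatenate([z_eeg, z_eog])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                        # drop self-similarity
    n = len(z_eeg)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    m = sim.max(axis=1, keepdims=True)                    # stable log-softmax
    log_den = m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True))
    log_prob = sim - log_den
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(6)
z_eeg = rng.normal(size=(8, 16))
loss = nt_xent(z_eeg, z_eeg + 0.01 * rng.normal(size=(8, 16)))
```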