{"title":"Nonparametric Dynamic Granger Causality based on Multi-Space Spectrum Fusion for Time-varying Directed Brain Network Construction.","authors":"Chanlin Yi, Jiamin Zhang, Zihan Weng, Wanjun Chen, Dezhong Yao, Fali Li, Zehong Cao, Peiyang Li, Peng Xu","doi":"10.1109/JBHI.2024.3477944","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3477944","url":null,"abstract":"<p><p>Nonparametric estimation of time-varying directed networks can unveil the intricate transient organization of directed brain communication while circumventing constraints imposed by prescribed model-driven methods. A robust time-frequency representation - the foundation of its causality inference - is critical for enhancing its reliability. This study proposed a novel method, i.e., nonparametric dynamic Granger causality based on Multi-space Spectrum Fusion (ndGCMSF), which integrates complementary spectrum information from different spaces to generate reliable spectral representations to estimate dynamic causalities across brain regions. Systematic simulations and validations demonstrate that ndGCMSF exhibits superior noise resistance and a powerful ability to capture subtle dynamic changes in directed brain networks. Particularly, ndGCMSF revealed that during instruction response movements, the laterality in the hemisphere ipsilateral to the hemiplegic limb emerges upon instruction onset and diminishes upon task accomplishment. These intrinsic variations further provide reliable features for distinguishing two types of hemiplegia (left vs. right) and assessing motor functions. 
The ndGCMSF offers powerful functional patterns for deriving effective brain networks in dynamically changing operational settings and can benefit a wide range of areas involving dynamic and directed communication.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142400183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
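Editor's note: the abstract above describes multi-space spectrum fusion only at a high level. As a loose illustrative sketch (not the authors' ndGCMSF algorithm), the snippet below fuses power spectra estimated with two different STFT window lengths (short for temporal resolution, long for frequency resolution) by a geometric mean, the kind of complementary-estimate fusion the abstract alludes to. All function names and parameters here are invented for illustration.

```python
import numpy as np

def stft_power(x, win_len, hop):
    """Average power spectrum via a Hann-windowed short-time Fourier transform."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
    return spec.mean(axis=0)          # average power per frequency bin

def fused_spectrum(x, win_lens=(64, 256), hop=32):
    """Fuse spectral estimates from different window lengths (geometric mean).

    Each estimate is interpolated onto the finest common frequency grid
    (normalised frequency, 0 to 0.5 cycles/sample) before fusion.
    """
    n_bins = max(w // 2 + 1 for w in win_lens)
    grid = np.linspace(0.0, 0.5, n_bins)
    stack = []
    for w in win_lens:
        p = stft_power(x, w, hop)
        f = np.linspace(0.0, 0.5, len(p))
        stack.append(np.interp(grid, f, p))
    # geometric mean across the two spectral "spaces"
    return np.exp(np.mean(np.log(np.array(stack) + 1e-12), axis=0))

# A 10 Hz sine sampled at 100 Hz should give a fused peak near 0.1 cycles/sample.
fs, f0 = 100.0, 10.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)
spec = fused_spectrum(x)
peak = np.linspace(0.0, 0.5, len(spec))[np.argmax(spec)]
```

In the actual method, such fused spectral representations would feed a nonparametric Granger-causality estimator; the fusion step alone is what is sketched here.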
{"title":"SCKansformer: Fine-Grained Classification of Bone Marrow Cells via Kansformer Backbone and Hierarchical Attention Mechanisms.","authors":"Yifei Chen, Zhu Zhu, Shenghao Zhu, Linwei Qiu, Binfeng Zou, Fan Jia, Yunpeng Zhu, Chenyan Zhang, Zhaojie Fang, Feiwei Qin, Jin Fan, Changmiao Wang, Gang Yu, Yu Gao","doi":"10.1109/JBHI.2024.3471928","DOIUrl":"10.1109/JBHI.2024.3471928","url":null,"abstract":"<p><p>The incidence and mortality rates of malignant tumors, such as acute leukemia, have risen significantly. Clinically, hospitals rely on cytological examination of peripheral blood and bone marrow smears to diagnose malignant tumors, with accurate blood cell counting being crucial. Existing automated methods face challenges such as low feature expression capability, poor interpretability, and redundant feature extraction when processing highdimensional microimage data. We propose a novel finegrained classification model, SCKansformer, for bone marrow blood cells, which addresses these challenges and enhances classification accuracy and efficiency. The model integrates the Kansformer Encoder, SCConv Encoder, and Global-Local Attention Encoder. The Kansformer Encoder replaces the traditional MLP layer with the KAN, improving nonlinear feature representation and interpretability. The SCConv Encoder, with its Spatial and Channel Reconstruction Units, enhances feature representation and reduces redundancy. The Global-Local Attention Encoder combines Multi-head Self-Attention with a Local Part module to capture both global and local features. We validated our model using the Bone Marrow Blood Cell FineGrained Classification Dataset (BMCD-FGCD), comprising over 10,000 samples and nearly 40 classifications, developed with a partner hospital. 
Comparative experiments on our private dataset, as well as the publicly available PBC and ALL-IDB datasets, demonstrate that SCKansformer outperforms both typical and advanced microcell classification methods across all datasets.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142400194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
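Editor's note: the abstract says the Kansformer Encoder "replaces the traditional MLP layer with the KAN" but gives no detail. Below is a toy forward pass illustrating the general KAN idea, with each input-to-output edge carrying a learnable univariate function (here a sum of Gaussian radial basis functions with learnable coefficients) instead of a scalar weight. This is an illustrative sketch under that assumption, not the SCKansformer implementation; all names and shapes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def kan_layer_forward(x, coef, centers, width=0.5):
    """Forward pass of a toy KAN-style layer.

    Each edge (input i -> output j) applies a learnable univariate function
        phi_ij(t) = sum_k coef[i, j, k] * exp(-((t - centers[k]) / width)**2)
    and the layer output is y_j = sum_i phi_ij(x_i).
    x:       (batch, d_in)
    coef:    (d_in, d_out, n_basis) learnable coefficients
    centers: (n_basis,) fixed RBF centres
    """
    # basis[b, i, k] = k-th RBF evaluated at x[b, i]
    basis = np.exp(-((x[:, :, None] - centers[None, None, :]) / width) ** 2)
    # contract over input dimension i and basis index k
    return np.einsum('bik,iok->bo', basis, coef)

d_in, d_out, n_basis = 4, 3, 8
centers = np.linspace(-2, 2, n_basis)
coef = rng.normal(size=(d_in, d_out, n_basis)) * 0.1
x = rng.normal(size=(5, d_in))
y = kan_layer_forward(x, coef, centers)
```

Compared with an MLP layer (fixed nonlinearity, scalar weights), the learned per-edge functions are what give KAN-based encoders their claimed interpretability.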
{"title":"MiRS-HF: A Novel Deep Learning Predictor for Cancer Classification and miRNA Expression Patterns.","authors":"Jie Ni, Donghui Yan, Shan Lu, Zhuoying Xie, Yun Liu, Xin Zhang","doi":"10.1109/JBHI.2024.3476672","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3476672","url":null,"abstract":"<p><p>Cancer classification and biomarker identification are crucial for guiding personalized treatment. To make effective use of miRNA associations and expression data, we have developed a deep learning model for cancer classification and biomarker identification. To make effective use of miRNA associations and expression data, we have developed a deep learning model for cancer classification and biomarker identification. We propose an approach for cancer classification called MiRNA Selection and Hybrid Fusion (MiRS-HF), which consists of early fusion and intermediate fusion. The early fusion involves applying a Layer Attention Graph Convolutional Network (LAGCN) to a miRNA-disease heterogeneous network, resulting in a miRNA-disease association degree score matrix. The intermediate fusion employs a Graph Convolutional Network (GCN) in the classification tasks, weighting the expression data based on the miRNA-disease association degree score. Furthermore, MiRS-HF can identify the important miRNA biomarkers and their expression patterns. The proposed method demonstrates superior performance in the classification tasks of six cancers compared to other methods. 
Moreover, incorporating the feature weighting strategy into a comparison algorithm also significantly improved its results, highlighting the importance of this strategy.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
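Editor's note: the intermediate-fusion step described above (weighting expression features by association scores before GCN propagation) can be sketched as one symmetric-normalized GCN layer over association-weighted features. This is a generic sketch of that idea, not the MiRS-HF code; the adjacency, scores, and shapes are invented.

```python
import numpy as np

def weighted_gcn_layer(X, A, assoc_scores, W):
    """One GCN propagation step on association-weighted expression features.

    X:            (n_samples, n_mirnas) expression matrix
    A:            (n_samples, n_samples) sample-similarity adjacency
    assoc_scores: (n_mirnas,) miRNA-disease association degree scores,
                  used to re-weight each miRNA feature before propagation
    W:            (n_mirnas, d_out) layer weights
    Returns ReLU(D^-1/2 (A + I) D^-1/2 (X * s) W).
    """
    Xw = X * assoc_scores[None, :]             # feature weighting
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalisation
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ Xw @ W
    return np.maximum(H, 0.0)                  # ReLU

rng = np.random.default_rng(1)
n, m, d_out = 6, 10, 4
X = rng.random((n, m))
A = (rng.random((n, n)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
scores = rng.random(m)
W = rng.normal(size=(m, d_out))
H = weighted_gcn_layer(X, A, scores, W)
```

Features with near-zero association scores contribute almost nothing to propagation, which is the mechanism the comparison experiment in the abstract exercises.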
{"title":"Inhibitory Components in Muscle Synergies Factorized by The Rectified Latent Variable Model from Electromyographic Data.","authors":"Xiaoyu Guo, Subing Huang, Borong He, Chuanlin Lan, Jodie J Xie, Kelvin Y S Lau, Tomohiko Takei, Arthur D P Mak, Roy T H Cheung, Kazuhiko Seki, Vincent C K Cheung, Rosa H M Chan","doi":"10.1109/JBHI.2024.3453603","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3453603","url":null,"abstract":"<p><p>Non-negative matrix factorization (NMF), widely used in motor neuroscience for identifying muscle synergies from electromyographical signals (EMGs), extracts non-negative synergies and is yet unable to identify potential negative components (NegCps) in synergies underpinned by inhibitory spinal interneurons. To overcome this constraint, we propose to utilize rectified latent variable model (RLVM) to extract muscle synergies. RLVM uses an autoencoder neural network, and the weight matrix of its neural network could be negative, while latent variables must remain non-negative. If inputs to the model are EMGs, the weight matrix and latent variables represent muscle synergies and their temporal activation coefficients, respectively. We compared performances of NMF and RLVM in identifying muscle synergies in simulated and experimental datasets. Our simulated results showed that RLVM performed better in identifying muscle-synergy subspace and NMF had a good correlation with ground truth. Finally, we applied RLVM to a previously published experimental dataset comprising EMGs from upper-limb muscles and spike recordings of spinal premotor interneurons (PreM-INs) collected from two macaque monkeys during grasping tasks. RLVM and NMF synergies were highly similar, but a few small negative muscle components were observed in RLVM synergies. The muscles with NegCps identified by RLVM exhibited near-zero values in their corresponding synergies identified by NMF. 
Importantly, NegCps of RLVM synergies showed correspondence with the muscle connectivity of PreM-INs with inhibitory muscle fields, as identified by spike-triggered averaging of EMGs. Our results demonstrate the feasibility of RLVM in extracting potential inhibitory muscle-synergy components from EMGs.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
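Editor's note: the key modeling difference above (synergy weights free to be negative, activations rectified to be non-negative) can be illustrated without the authors' autoencoder. The sketch below uses a toy alternating least-squares scheme with clipping on synthetic data containing one mildly inhibitory synergy weight; everything here (optimiser, data, shapes) is invented for illustration.

```python
import numpy as np

def fit_rlvm_like(X, k, iters=100, seed=0):
    """Factorize X ~ W @ H with rectified latents H >= 0 but W unconstrained.

    This mirrors the relaxation described in the abstract: unlike NMF, the
    synergy matrix W may contain negative entries (candidate inhibitory
    components), while activations H stay non-negative.  A toy alternating
    least-squares scheme with clipping, not the authors' autoencoder.
    """
    rng = np.random.default_rng(seed)
    H = np.abs(rng.normal(size=(k, X.shape[1])))
    for _ in range(iters):
        W = X @ np.linalg.pinv(H)                     # W free to go negative
        H = np.maximum(np.linalg.pinv(W) @ X, 0.0)    # rectify latents only
    return W, H

# Synthetic EMG-like data: two synergies, one with a negative (inhibitory)
# muscle weight, driven by non-negative activations.
rng = np.random.default_rng(42)
W_true = np.array([[1.0, 0.0],
                   [0.8, 0.2],
                   [-0.3, 1.0],    # the negative entry NMF cannot represent
                   [0.0, 0.9]])
H_true = np.abs(rng.normal(size=(2, 200)))
X = W_true @ H_true
W, H = fit_rlvm_like(X, k=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Under the NMF constraint both W and H would be clipped, so a factor like the -0.3 muscle weight could only be approximated by zero, which matches the abstract's observation that NegCps muscles show near-zero NMF weights.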
{"title":"Automated Quantification of HER2 Amplification Levels Using Deep Learning.","authors":"Ching-Wei Wang, Kai-Lin Chu, Ting-Sheng Su, Keng-Wei Liu, Yi-Jia Lin, Tai-Kuang Chao","doi":"10.1109/JBHI.2024.3476554","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3476554","url":null,"abstract":"<p><p>HER2 assessment is necessary for patient selection in anti-HER2 targeted treatment. However, manual assessment of HER2 amplification is time-costly, labor-intensive, highly subjective and error-prone. Challenges in HER2 analysis in fluorescence in situ hybridization (FISH) and dual in situ hybridization (DISH) images include unclear and blurry cell boundaries, large variations in cell shapes and signals, overlapping and clustered cells and sparse label issues with manual annotations only on cells with high confidences, producing subjective assessment scores according to the individual choices on cell selection. To address the above-mentioned issues, we have developed a soft-sampling cascade deep learning model and a signal detection model in quantifying CEN17 and HER2 of cells to assist assessment of HER2 amplification status for patient selection of HER2 targeting therapy to breast cancer. In evaluation with two different kinds of clinical datasets, including a FISH data set and a DISH data set, the proposed method achieves high accuracy, recall and F1-score for both datasets in instance segmentation of HER2 related cells that must contain both CEN17 and HER2 signals. Moreover, the proposed method is demonstrated to significantly outperform seven state of the art recently published deep learning methods, including contour proposal network (CPN), soft label-based FCN (SL-FCN), modified fully convolutional network (M-FCN), bilayer convolutional network (BCNet), SOLOv2, Cascade R-CNN and DeepLabv3+ with three different backbones (p ≤ 0.01). Clinically, anti-HER2 therapy can also be applied to gastric cancer patients. 
We applied the developed model to assist HER2 DISH amplification assessment for gastric cancer patients, and it also showed promising predictive results (accuracy 97.67 ± 1.46%, precision 96.15 ± 5.82%).</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
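Editor's note: downstream of the cell and signal detection described above, HER2 amplification is conventionally called from per-cell HER2 and CEN17 signal counts. The sketch below follows the common HER2/CEN17 ratio >= 2.0 convention in a heavily simplified form (real ASCO/CAP scoring has additional equivocal categories); the function, cutoffs, and data are illustrative, not the paper's pipeline.

```python
def her2_amplification_status(cells, ratio_cutoff=2.0, copies_cutoff=6.0):
    """Simplified HER2 amplification call from per-cell signal counts.

    cells: list of (her2_count, cen17_count) tuples, one per scored cell.
    Returns (HER2/CEN17 ratio, mean HER2 copy number, amplified flag).
    """
    her2 = sum(c[0] for c in cells)
    cen17 = sum(c[1] for c in cells)
    ratio = her2 / cen17
    mean_copies = her2 / len(cells)
    # simplified call: high ratio or high absolute copy number
    amplified = ratio >= ratio_cutoff or mean_copies >= copies_cutoff
    return ratio, mean_copies, amplified

# 20 cells with ~5 HER2 and ~2 CEN17 signals each -> ratio 2.5, amplified.
cells = [(5, 2)] * 20
ratio, copies, amp = her2_amplification_status(cells)
```

Automating the counts that feed this kind of rule is precisely why objective instance segmentation of cells with both signal types matters.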
{"title":"A Trustworthy Curriculum Learning Guided Multi-Target Domain Adaptation Network for Autism Spectrum Disorder Classification.","authors":"Jiale Dun, Jun Wang, Juncheng Li, Qianhui Yang, Wenlong Hang, Xiaofeng Lu, Shihui Ying, Jun Shi","doi":"10.1109/JBHI.2024.3476076","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3476076","url":null,"abstract":"<p><p>Domain adaptation has demonstrated success in classification of multi-center autism spectrum disorder (ASD). However, current domain adaptation methods primarily focus on classifying data in a single target domain with the assistance of one or multiple source domains, lacking the capability to address the clinical scenario of identifying ASD in multiple target domains. In response to this limitation, we propose a Trustworthy Curriculum Learning Guided Multi-Target Domain Adaptation (TCL-MTDA) network for identifying ASD in multiple target domains. To effectively handle varying degrees of data shift in multiple target domains, we propose a trustworthy curriculum learning procedure based on the Dempster-Shafer (D-S) Theory of Evidence. Additionally, a domain-contrastive adaptation method is integrated into the TCL-MTDA process to align data distributions between source and target domains, facilitating the learning of domain-invariant features. The proposed TCL-MTDA method is evaluated on 437 subjects (including 220 ASD patients and 217 NCs) from the Autism Brain Imaging Data Exchange (ABIDE). 
Experimental results validate the effectiveness of our proposed method in multi-target ASD classification, achieving an average accuracy of 71.46% (95% CI: 68.85% - 74.06%) across four target domains, significantly outperforming most baseline methods (p<0.05).</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
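Editor's note: the trustworthy curriculum learning above rests on the Dempster-Shafer (D-S) Theory of Evidence. As a generic illustration (not the TCL-MTDA procedure), the snippet below implements Dempster's rule of combination for two mass functions over a hypothetical {ASD, NC} frame of discernment, where each source can reserve mass for the full set to express uncertainty.

```python
from itertools import product

def ds_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozenset hypotheses to belief mass.
    Mass assigned to conflicting (empty-intersection) pairs is
    renormalised away.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

ASD, NC = frozenset({"ASD"}), frozenset({"NC"})
EITHER = ASD | NC
# Two hypothetical evidential outputs, each reserving mass for "either"
m1 = {ASD: 0.6, NC: 0.1, EITHER: 0.3}
m2 = {ASD: 0.5, NC: 0.2, EITHER: 0.3}
fused = ds_combine(m1, m2)
```

The residual mass on the full set gives a natural uncertainty estimate per sample, which is the kind of quantity a trust-aware curriculum can rank samples by.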
{"title":"Cascaded Inner-Outer Clip Retformer for Ultrasound Video Object Segmentation.","authors":"Jialu Li, Lei Zhu, Zhaohu Xing, Baoliang Zhao, Ying Hu, Faqin Lv, Qiong Wang","doi":"10.1109/JBHI.2024.3464732","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3464732","url":null,"abstract":"<p><p>Computer-aided ultrasound (US) imaging is an important prerequisite for early clinical diagnosis and treatment. Due to the harsh ultrasound (US) image quality and the blurry tumor area, recent memory-based video object segmentation models (VOS) achieve frame-level segmentation by performing intensive similarity matching among the past frames which could inevitably result in computational redundancy. Furthermore, the current attention mechanism utilized in recent models only allocates the same attention level among whole spatial-temporal memory features without making distinctions, which may result in accuracy degradation. In this paper, we first build a larger annotated benchmark dataset for breast lesion segmentation in ultrasound videos, then we propose a lightweight clip-level VOS framework for achieving higher segmentation accuracy while maintaining the speed. The Inner-Outer Clip Retformer is proposed to extract spatialtemporal tumor features in parallel. Specifically, the proposed Outer Clip Retformer extracts the tumor movement feature from past video clips to locate the current clip tumor position, while the Inner Clip Retformer detailedly extracts current tumor features that can produce more accurate segmentation results. Then a Clip Contrastive loss function is further proposed to align the extracted tumor features along both the spatial-temporal dimensions to improve the segmentation accuracy. In addition, the Global Retentive Memory is proposed to maintain the complementary tumor features with lower computing resources which can generate coherent temporal movement features. 
In this way, our model can significantly improve spatial-temporal perception without adding a large number of parameters, achieving more accurate segmentation results while maintaining a faster segmentation speed. Finally, we conduct extensive experiments to evaluate our proposed model on several video object segmentation datasets; the results show that our framework outperforms state-of-the-art segmentation methods.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HMDA: A Hybrid Model with Multi-scale Deformable Attention for Medical Image Segmentation.","authors":"Mengmeng Wu, Tiantian Liu, Xin Dai, Chuyang Ye, Jinglong Wu, Shintaro Funahashi, Tianyi Yan","doi":"10.1109/JBHI.2024.3469230","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3469230","url":null,"abstract":"<p><p>Transformers have been applied to medical image segmentation tasks owing to their excellent longrange modeling capability, compensating for the failure of Convolutional Neural Networks (CNNs) to extract global features. However, the standardized self-attention modules in Transformers, characterized by a uniform and inflexible pattern of attention distribution, frequently lead to unnecessary computational redundancy with high-dimensional data, consequently impeding the model's capacity for precise concentration on salient image regions. Additionally, achieving effective explicit interaction between the spatially detailed features captured by CNNs and the long-range contextual features provided by Transformers remains challenging. In this architecture, we propose a Hybrid Transformer and CNN architecture with Multi-scale Deformable Attention(HMDA), designed to address the aforementioned issues effectively. Specifically, we introduce a Multi-scale Spatially Adaptive Deformable Attention (MSADA) mechanism, which attends to a small set of key sampling points around a reference within the multi-scale features, to achieve better performance. In addition, we propose the Cross Attention Bridge (CAB) module, which integrates multi-scale transformer and local features through channelwise cross attention enriching feature synthesis. 
HMDA is validated on multiple datasets, and the results demonstrate the effectiveness of our approach, which achieves competitive performance compared to previous methods.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
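Editor's note: the core of deformable attention mechanisms like the MSADA described above is that a query attends only to a few sampled points around a reference location rather than to every position. The single-query, single-scale sketch below shows that sampling-and-weighting pattern with bilinear interpolation; it is a generic illustration, not the HMDA module, and all names and shapes are invented.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample feat (H, W, C) at a continuous location (y, x)."""
    H, W, _ = feat.shape
    y = np.clip(y, 0, H - 1); x = np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def deformable_attention(feat, ref, offsets, logits):
    """Single-query deformable attention over one feature map.

    Instead of attending to every location, the query attends to K
    sampling points at ref + offsets, combined by softmax weights.
    feat: (H, W, C); ref: (2,); offsets: (K, 2); logits: (K,)
    """
    w = np.exp(logits - logits.max())
    w = w / w.sum()                               # softmax attention weights
    samples = np.stack([bilinear_sample(feat, ref[0] + dy, ref[1] + dx)
                        for dy, dx in offsets])   # (K, C)
    return w @ samples                            # (C,) attended feature

rng = np.random.default_rng(3)
feat = rng.random((16, 16, 8))
out = deformable_attention(feat,
                           ref=np.array([7.5, 7.5]),
                           offsets=rng.normal(scale=2.0, size=(4, 2)),
                           logits=rng.normal(size=4))
```

In a full model the offsets and logits are predicted from the query, and the sampling is repeated per head and per feature scale; the cost stays linear in K rather than quadratic in the number of positions.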
{"title":"TBE-Net: A Deep Network Based on Tree-like Branch Encoder for Medical Image Segmentation.","authors":"Shukai Yang, Xiaoqian Zhang, Youdong He, Yufeng Chen, Ying Zhou","doi":"10.1109/JBHI.2024.3468904","DOIUrl":"10.1109/JBHI.2024.3468904","url":null,"abstract":"<p><p>In recent years, encoder-decoder-based network structures have been widely used in designing medical image segmentation models. However, these methods still face some limitations: 1) The network's feature extraction capability is limited, primarily due to insufficient attention to the encoder, resulting in a failure to extract rich and effective features. 2) Unidirectional stepwise decoding of smaller-sized feature maps restricts segmentation performance. To address the above limitations, we propose an innovative Tree-like Branch Encoder Network (TBE-Net), which adopts a tree-like branch encoder to better perform feature extraction and preserve feature information. Additionally, we introduce the Depth and Width Expansion (D-WE) module to expand the network depth and width at low parameter cost, thereby enhancing network performance. Furthermore, we design a Deep Aggregation Module (DAM) to better aggregate and process encoder features. Subsequently, we directly decode the aggregated features to generate the segmentation map. 
The experimental results show that, compared to other advanced algorithms, our method, with the lowest parameter cost, achieved improvements in the IoU metric on the TNBC, PH2, CHASE-DB1, STARE, and COVID-19-CT-Seg datasets by 1.6%, 0.46%, 0.81%, 1.96%, and 0.86%, respectively.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
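Editor's note: the record above reports improvements "in the IoU metric" across five datasets. For reference, IoU (the Jaccard index) for binary segmentation masks is the standard quantity sketched below; the toy masks are invented for illustration.

```python
import numpy as np

def iou(pred, target, eps=1e-7):
    """Intersection-over-Union between two binary masks (numpy arrays)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps)   # eps guards against empty masks

a = np.zeros((4, 4), dtype=int); a[:2, :] = 1   # top half predicted
b = np.zeros((4, 4), dtype=int); b[1:3, :] = 1  # middle band is ground truth
# overlap = row 1 (4 px), union = rows 0-2 (12 px) -> IoU = 1/3
score = iou(a, b)
```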
{"title":"A novel recognition and classification approach for motor imagery based on spatio-temporal features.","authors":"Renjie Lv, Wenwen Chang, Guanghui Yan, Wenchao Nie, Lei Zheng, Bin Guo, Muhammad Tariq Sadiq","doi":"10.1109/JBHI.2024.3464550","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3464550","url":null,"abstract":"<p><p>Motor imagery, as a paradigm of brainmachine interfaces, holds vast potential in the field of medical rehabilitation. Addressing the challenges posed by the non-stationarity and low signal-to-noise ratio of EEG signals, the effective extraction of features from motor imagery signals for accurate recognition stands as a key focus in motor imagery brain-machine interface technology. This paper proposes a motor imagery EEG signal classification model that combines functional brain networks with graph convolutional networks. First, functional brain networks are constructed using different brain functional connectivity metrics, and graph theory features are calculated to deeply analyze the characteristics of brain networks under different motor tasks. Then, the constructed functional brain networks are combined with graph convolutional networks for the classification and recognition of motor imagery tasks. The analysis based on brain functional connectivity reveals that the functional connectivity strength during the both fists task is significantly higher than that of other motor imagery tasks, and the functional connectivity strength during actual movement is generally superior to that of motor imagery tasks. In experiments conducted on the Physionet public dataset, the proposed model achieved a classification accuracy of 88.39% under multi-subject conditions, significantly outperforming traditional methods. Under single-subject conditions, the model effectively addressed the issue of individual variability, achieving an average classification accuracy of 99.31%. 
These results indicate that the proposed model not only exhibits excellent performance in the classification of motor imagery tasks but also provides new insights into the functional connectivity characteristics of different motor tasks and their corresponding brain regions.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
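Editor's note: the first stage described above (building functional brain networks from EEG and computing graph-theory features) can be sketched compactly. The snippet below uses absolute Pearson correlation as one common connectivity metric (the paper compares several) and weighted node degree (strength) as one graph feature; channel counts and data are synthetic and illustrative.

```python
import numpy as np

def functional_connectivity(eeg):
    """Pearson-correlation functional connectivity matrix from EEG.

    eeg: (n_channels, n_samples).  Returns the absolute correlation
    matrix with a zeroed diagonal (no self-connections).
    """
    C = np.abs(np.corrcoef(eeg))
    np.fill_diagonal(C, 0.0)
    return C

def node_strength(C):
    """Graph-theory feature: weighted degree (strength) of each node."""
    return C.sum(axis=1)

rng = np.random.default_rng(7)
n_ch, n_s = 8, 512
eeg = rng.normal(size=(n_ch, n_s))
eeg[1] = eeg[0] + 0.1 * rng.normal(size=n_s)   # couple channels 0 and 1
C = functional_connectivity(eeg)
strength = node_strength(C)
```

In the full pipeline, matrices like C serve both for the connectivity-strength comparisons across tasks and as the graph input to the GCN classifier.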