Jingyu Liu, Honghua Wang, Jialing Wang, Han Wu, Yifei Xu, Tao Zhang, Guangfeng Zhou
{"title":"多层变压器早期诊断婴儿自闭症谱系障碍和语言障碍","authors":"Jingyu Liu, Honghua Wang, Jialing Wang, Han Wu, Yifei Xu, Tao Zhang, Guangfeng Zhou","doi":"10.1111/coin.70131","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Current diagnosis of autism spectrum disorder (ASD) and developmental language/reading disorder (DLD/DD) relies predominantly on subjective behavioral assessments, this underscoring the urgent need for objective biomarkers to enable early intervention. This study proposes a Multi-Tier Transformer (MTT) model for early identification of ASD and DLD/DD using resting-state EEG baseline power values. To address severe class imbalance, we augmented the dataset using the SMOTE method. The MTT architecture integrates a Feature-Embedding layer, a Feature-Attention mechanism that dynamically weights multi-spectral inputs, and a dual-attention encoding block comprising both self-attention and cross-attention to enhance contextual representation learning from limited samples. Transfer learning was further employed to improve robustness by pre-training on augmented data and fine-tuning on original samples. Evaluated on clinical infant EEG data, the proposed MTT achieved an accuracy of 0.91 (95% CI: 0.89–0.93), recall of 0.89, and AUC of 0.97, significantly outperforming the state-of-the-art FT-transformer (<i>p</i> = 0.00091). 
The results indicate that MTT provides a robust and interpretable deep learning tool for auxiliary diagnosis of neurodevelopmental disorders in infancy.</p>\n </div>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"41 6","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MTT: Multi-Tier Transformer for Early Diagnosis of Autism Spectrum Disorder and Language Disorder in Infants\",\"authors\":\"Jingyu Liu, Honghua Wang, Jialing Wang, Han Wu, Yifei Xu, Tao Zhang, Guangfeng Zhou\",\"doi\":\"10.1111/coin.70131\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Current diagnosis of autism spectrum disorder (ASD) and developmental language/reading disorder (DLD/DD) relies predominantly on subjective behavioral assessments, this underscoring the urgent need for objective biomarkers to enable early intervention. This study proposes a Multi-Tier Transformer (MTT) model for early identification of ASD and DLD/DD using resting-state EEG baseline power values. To address severe class imbalance, we augmented the dataset using the SMOTE method. The MTT architecture integrates a Feature-Embedding layer, a Feature-Attention mechanism that dynamically weights multi-spectral inputs, and a dual-attention encoding block comprising both self-attention and cross-attention to enhance contextual representation learning from limited samples. Transfer learning was further employed to improve robustness by pre-training on augmented data and fine-tuning on original samples. Evaluated on clinical infant EEG data, the proposed MTT achieved an accuracy of 0.91 (95% CI: 0.89–0.93), recall of 0.89, and AUC of 0.97, significantly outperforming the state-of-the-art FT-transformer (<i>p</i> = 0.00091). 
The results indicate that MTT provides a robust and interpretable deep learning tool for auxiliary diagnosis of neurodevelopmental disorders in infancy.</p>\\n </div>\",\"PeriodicalId\":55228,\"journal\":{\"name\":\"Computational Intelligence\",\"volume\":\"41 6\",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-10-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/coin.70131\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/coin.70131","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
MTT: Multi-Tier Transformer for Early Diagnosis of Autism Spectrum Disorder and Language Disorder in Infants
Current diagnosis of autism spectrum disorder (ASD) and developmental language/reading disorder (DLD/DD) relies predominantly on subjective behavioral assessments, underscoring the urgent need for objective biomarkers to enable early intervention. This study proposes a Multi-Tier Transformer (MTT) model for early identification of ASD and DLD/DD using resting-state EEG baseline power values. To address severe class imbalance, we augmented the dataset using the SMOTE method. The MTT architecture integrates a Feature-Embedding layer, a Feature-Attention mechanism that dynamically weights multi-spectral inputs, and a dual-attention encoding block comprising both self-attention and cross-attention to enhance contextual representation learning from limited samples. Transfer learning was further employed to improve robustness by pre-training on augmented data and fine-tuning on original samples. Evaluated on clinical infant EEG data, the proposed MTT achieved an accuracy of 0.91 (95% CI: 0.89–0.93), recall of 0.89, and AUC of 0.97, significantly outperforming the state-of-the-art FT-Transformer (p = 0.00091). The results indicate that MTT provides a robust and interpretable deep learning tool for auxiliary diagnosis of neurodevelopmental disorders in infancy.
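The abstract's dual-attention encoding block combines self-attention over the input sequence with cross-attention against a second sequence. The paper's exact layer layout, projections, and normalization are not given here, so the following is only a minimal NumPy sketch of that general pattern (scaled dot-product attention with residual connections; learned projection matrices and layer norm are omitted for brevity):

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def dual_attention_block(x, context):
    """Sketch of a dual-attention block: self-attention over x,
    then cross-attention from x against a context sequence.
    Residual connections only; projections/LayerNorm omitted."""
    h = x + attention(x, x, x)                 # self-attention + residual
    h = h + attention(h, context, context)     # cross-attention + residual
    return h

rng = np.random.default_rng(0)
tokens = rng.random((5, 8))    # 5 tokens, 8-dim embeddings (illustrative)
ctx = rng.random((7, 8))       # 7-token context sequence
out = dual_attention_block(tokens, ctx)
```

The output keeps the shape of the query sequence (here 5 tokens of dimension 8), since cross-attention only reads from the context to re-weight the query representations.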
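The class-imbalance step relies on SMOTE, which synthesizes new minority-class samples by interpolating between a sample and one of its k nearest minority neighbors. The study's actual pipeline is not specified beyond naming the method, so this is a self-contained toy implementation of the core idea (a production pipeline would typically use `imblearn.over_sampling.SMOTE`):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize n_new minority samples by
    interpolating each random seed toward one of its k nearest
    minority-class neighbors."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbors per sample
    seeds = rng.integers(0, n, size=n_new)      # random minority seeds
    nbrs = nn[seeds, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))                # interpolation factor in [0, 1)
    return X_min[seeds] + gap * (X_min[nbrs] - X_min[seeds])

rng = np.random.default_rng(0)
X_minority = rng.random((10, 4))                # 10 samples, 4 features (toy data)
synthetic = smote_oversample(X_minority, 25, k=3, rng=1)
```

Because each synthetic point is a convex combination of two real minority samples, it always lies on the line segment between them, so SMOTE enlarges the minority class without stepping outside its feature-space envelope.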
Journal overview:
This leading international journal promotes and stimulates research in the field of artificial intelligence (AI). Covering a wide range of issues - from the tools and languages of AI to its philosophical implications - Computational Intelligence provides a vigorous forum for the publication of both experimental and theoretical research, as well as surveys and impact studies. The journal is designed to meet the needs of a wide range of AI workers in academic and industrial research.