Transformer3: A Pure Transformer Framework for fMRI-Based Representations of Human Brain Function.
Authors: Xiaoxi Tian, Hao Ma, Yun Guan, Le Xu, Jiangcong Liu, Lixia Tian
DOI: 10.1109/JBHI.2024.3471186
Journal: IEEE Journal of Biomedical and Health Informatics (Q1, Computer Science, Information Systems; impact factor 6.7)
Publication date: 2024-10-04
Publication type: Journal Article
Effective representation learning is essential for neuroimage-based individualized predictions. Numerous studies have performed fMRI-based individualized prediction by leveraging the sample-wise, spatial, and temporal interdependencies hidden in fMRI data. However, these studies did not fully exploit the information in fMRI data, as each analyzed only one or two of the three types of interdependency. To extract representations of human brain function that fully leverage all three types of interdependency, we establish a pure transformer-based framework, Transformer3, building on the transformer's strong ability to capture interdependencies within its input. Transformer3 consists mainly of three transformer modules: a Batch Transformer module that addresses sample-wise similarities and differences, a Region Transformer module that handles complex spatial interdependencies among brain regions, and a Time Transformer module that captures temporal interdependencies across time points. Experiments on age, IQ, and sex prediction on two public datasets demonstrate the effectiveness of the proposed Transformer3. Because its only assumption is that sample-wise, spatial, and temporal interdependencies exist extensively within the input data, Transformer3 can be applied broadly to representation learning on multivariate time series. Furthermore, the pure transformer design makes it convenient to interrogate the driving factors underlying predictive models built on Transformer3.
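The abstract describes attention applied along three axes of an fMRI tensor shaped (samples, regions, timepoints). The paper's code is not reproduced here; the following is a minimal hypothetical sketch of that idea in PyTorch, with all class names, dimensions, and the stacking order being assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the Transformer3 idea: self-attention along the
# time, region, and sample (batch) axes of data shaped (samples, regions,
# timepoints). Module names and hyperparameters are illustrative only.
import torch
import torch.nn as nn


class AxisTransformer(nn.Module):
    """Applies a standard transformer encoder layer over one axis,
    treating that axis as the sequence dimension."""

    def __init__(self, d_model: int, n_heads: int = 2):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)

    def forward(self, x):  # x: (B, L, d_model); attention runs over L
        return self.layer(x)


class Transformer3Sketch(nn.Module):
    def __init__(self, n_regions: int, n_timepoints: int):
        super().__init__()
        # Time Transformer: sequence = timepoints, features = regions
        self.time_tf = AxisTransformer(d_model=n_regions)
        # Region Transformer: sequence = regions, features = timepoints
        self.region_tf = AxisTransformer(d_model=n_timepoints)
        # Batch Transformer: sequence = samples within a batch
        self.batch_tf = AxisTransformer(d_model=n_timepoints)

    def forward(self, x):  # x: (samples, regions, timepoints)
        # Temporal interdependencies across time points.
        x = self.time_tf(x.transpose(1, 2)).transpose(1, 2)
        # Spatial interdependencies among brain regions.
        x = self.region_tf(x)
        # Sample-wise interdependencies: make samples the sequence axis,
        # so each region's batch of samples attends to one another.
        x = self.batch_tf(x.permute(1, 0, 2)).permute(1, 0, 2)
        return x


x = torch.randn(8, 16, 32)  # 8 samples, 16 regions, 32 timepoints
out = Transformer3Sketch(n_regions=16, n_timepoints=32)(x)
print(out.shape)  # torch.Size([8, 16, 32])
```

The output keeps the input shape, so the three modules can be stacked or followed by a pooling head for age, IQ, or sex prediction; the actual ordering and fusion of the modules in Transformer3 may differ.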
Journal description:
IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.