Beyond averaging: A transformer approach to decoding event related brain potentials

IF 4.7 | CAS Region 2 (Medicine) | Q1 NEUROIMAGING
Philipp Zelger, Manuel Arnold, Sonja Rossi, Josef Seebacher, Franz Muigg, Simone Graf, Antonio Rodríguez-Sánchez
{"title":"Beyond averaging: A transformer approach to decoding event related brain potentials","authors":"Philipp Zelger ,&nbsp;Manuel Arnold ,&nbsp;Sonja Rossi ,&nbsp;Josef Seebacher ,&nbsp;Franz Muigg ,&nbsp;Simone Graf ,&nbsp;Antonio Rodríguez-Sánchez","doi":"10.1016/j.neuroimage.2025.121049","DOIUrl":null,"url":null,"abstract":"<div><div>The objective of this study is to assess the potential of a transformer-based deep learning approach applied to event-related brain potentials (ERPs) derived from electroencephalographic (EEG) data. Traditional methods involve averaging the EEG signal of multiple trials to extract valuable neural signals from the high noise content of EEG data. However, this averaging technique may conceal relevant information. Our investigation focuses on determining whether a transformer-based deep learning approach, specifically utilizing attention maps, an essential component of transformer networks, can provide deeper insights into ERP data compared to traditional averaging-based analyses.</div><div>We investigated the data of an experiment on loudness perception. In the study, 29 normal-hearing participants between 18 and 30 years were presented with acoustic stimuli at five different sound levels between 65 and 95 dB and provided their subjective loudness rating, which was categorized as ”too loud” and ”not too loud”. During the sound presentation, EEG signals were recorded.</div><div>A convolutional transformer was trained to categorize the EEG data into the two classes (”not too loud” and ”too loud”). The classifier exhibited exceptional performance, achieving over 86 % accuracy and an Area under the Curve (AUC) of up to 0.95.</div><div>Through the utilization of the trained networks, attention maps were generated. Those attention maps provided insights into the time windows relevant for classification within the EEG data. The attention maps above all showed a focus on the time window around 150 to 200 ms, where the average based analysis did not indicate relevant potentials.</div><div>Employing these attention maps, we were able to gain new perspectives on the ERPs, discovering the attention maps potential as a tool for delving deeper into the analysis of event-related potentials.</div></div>","PeriodicalId":19299,"journal":{"name":"NeuroImage","volume":"308 ","pages":"Article 121049"},"PeriodicalIF":4.7000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"NeuroImage","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1053811925000515","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NEUROIMAGING","Score":null,"Total":0}
Citations: 0

Abstract

The objective of this study is to assess the potential of a transformer-based deep learning approach applied to event-related brain potentials (ERPs) derived from electroencephalographic (EEG) data. Traditional methods involve averaging the EEG signal across multiple trials to extract valuable neural signals from the high noise content of EEG data. However, this averaging technique may conceal relevant information. Our investigation focuses on determining whether a transformer-based deep learning approach, specifically utilizing attention maps, an essential component of transformer networks, can provide deeper insights into ERP data compared to traditional averaging-based analyses.
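For context, the conventional averaging pipeline amounts to taking the mean over the trial dimension of the epoched EEG. The following minimal sketch illustrates this; the array shapes and variable names are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of conventional ERP extraction by trial averaging
# (illustrative only; shapes and names are assumed, not taken from the paper).
import numpy as np

def average_erp(epochs: np.ndarray) -> np.ndarray:
    """Average epoched EEG over trials.

    epochs: array of shape (n_trials, n_channels, n_samples),
            e.g. baseline-corrected single-trial epochs.
    Returns the ERP of shape (n_channels, n_samples).
    """
    return epochs.mean(axis=0)

# Example: 100 simulated trials, 32 channels, 500 samples per epoch.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(100, 32, 500))
erp = average_erp(epochs)  # trial-to-trial variability is averaged out
print(erp.shape)           # (32, 500)
```

Averaging suppresses activity that is not time- and phase-locked across trials, which is exactly the kind of information a single-trial classifier might still exploit.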
We investigated the data of an experiment on loudness perception. In the study, 29 normal-hearing participants aged 18 to 30 years were presented with acoustic stimuli at five different sound levels between 65 and 95 dB and provided their subjective loudness rating, which was categorized as either "too loud" or "not too loud". During the sound presentation, EEG signals were recorded.
A convolutional transformer was trained to categorize the EEG data into the two classes ("not too loud" and "too loud"). The classifier performed strongly, achieving over 86% accuracy and an area under the curve (AUC) of up to 0.95.
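As a rough illustration of this kind of architecture, the sketch below combines a temporal convolution front end with a transformer encoder for binary epoch classification. The layer sizes, kernel widths, class name, and omission of positional encoding are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a convolutional transformer for binary EEG classification
# (illustrative; hyperparameters and architecture details are assumptions).
import torch
import torch.nn as nn

class ConvTransformerClassifier(nn.Module):
    def __init__(self, n_channels=32, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # Temporal convolution maps raw EEG channels to d_model features
        # and downsamples the time axis (stride 4).
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=15, stride=4, padding=7),
            nn.BatchNorm1d(d_model),
            nn.GELU(),
        )
        # Transformer encoder attends over the resulting time tokens
        # (positional encoding omitted here for brevity).
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_samples)
        z = self.conv(x)                 # (batch, d_model, n_tokens)
        z = z.transpose(1, 2)            # (batch, n_tokens, d_model)
        z = self.encoder(z)              # self-attention over time tokens
        return self.head(z.mean(dim=1))  # pool over time, then classify

model = ConvTransformerClassifier()
logits = model(torch.randn(8, 32, 500))  # 8 epochs, 32 channels, 500 samples
print(logits.shape)  # torch.Size([8, 2]) -> "not too loud" vs. "too loud"
```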
Attention maps were generated from the trained networks. These attention maps provided insights into the time windows relevant for classification within the EEG data. Above all, the attention maps showed a focus on the time window around 150 to 200 ms, where the averaging-based analysis did not indicate relevant potentials.
Employing these attention maps, we were able to gain new perspectives on the ERPs, revealing the attention maps' potential as a tool for delving deeper into the analysis of event-related potentials.
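To make the attention-map idea concrete, the sketch below shows how a head-averaged self-attention matrix can be collapsed into one attention value per time token and mapped back to post-stimulus latency. The token-to-latency conversion (1000 Hz sampling with a stride-4 front end) and all names are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of deriving a time-resolved attention map from a
# self-attention layer (illustrative; names and latency mapping are assumed).
import torch
import torch.nn as nn

d_model, n_heads, n_tokens = 64, 4, 125   # e.g. tokens after a conv front end
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

tokens = torch.randn(8, n_tokens, d_model)   # (batch, time tokens, features)
# average_attn_weights=True returns head-averaged weights of
# shape (batch, n_tokens, n_tokens).
_, weights = attn(tokens, tokens, tokens,
                  need_weights=True, average_attn_weights=True)

# Collapse the query dimension and the batch to get one attention value
# per time token: high values mark latencies the model relies on.
attention_map = weights.mean(dim=1).mean(dim=0)   # (n_tokens,)

# Map tokens back to post-stimulus latency (assuming 1000 Hz sampling and
# a stride-4 convolution, i.e. 4 ms per token).
latency_ms = torch.arange(n_tokens) * 4
peak = latency_ms[attention_map.argmax()]
print(f"most attended latency: ~{int(peak)} ms")
```

In the paper's setting, a peak of such a profile in the 150 to 200 ms window would correspond to the region the attention maps highlighted despite the averaged ERP showing no prominent component there.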
Source journal
NeuroImage (Medicine - Nuclear Medicine)
CiteScore: 11.30
Self-citation rate: 10.50%
Articles per year: 809
Review time: 63 days
Journal description: NeuroImage, a Journal of Brain Function provides a vehicle for communicating important advances in acquiring, analyzing, and modelling neuroimaging data and in applying these techniques to the study of structure-function and brain-behavior relationships. Though the emphasis is on the macroscopic level of human brain organization, meso- and microscopic neuroimaging across all species will be considered if informative for understanding the aforementioned relationships.