Towards Robust, Reproducible, and Clinically Actionable EEG Biomarkers: Large Open Access EEG Database for Discovery and Out-of-sample Validation

Clinical EEG and Neuroscience (Q3, Clinical Neurology; IF 1.6), 2023; 54(2): 103-105. doi: 10.1177/15500594221120516
Hanneke van Dijk, Mark Koppenberg, Martijn Arns
{"title":"迈向稳健、可重复和临床可操作的脑电图生物标志物:用于发现和样本外验证的大型开放访问脑电图数据库。","authors":"Hanneke van Dijk,&nbsp;Mark Koppenberg,&nbsp;Martijn Arns","doi":"10.1177/15500594221120516","DOIUrl":null,"url":null,"abstract":"Recently, one of the most impactful neuroscience findings that forms a fundament of the amyloid hypothesis of Alzheimer was called into question after evidence was presented about image tampering of Western Blots (see for full overview). A 2006 Nature paper which reported that Aβ clumps, also known as plaques, in brain tissue are the primary cause of Azheimer’s symptoms, formed a key piece of evidence of this well-known amyloid hypothesis. The image tampering was not only found in the 2006 study, follow-up investigations uncovered 100’s of doctored images, including at least 70 from the original group, calling much of the evidence behind the amyloid hypothesis into question. This means that research on Alzheimer’s has been misdirected for >16 years, billions of research dollars mis-spent and probably a multitude of that amount spent by the pharmacological industry on development of drugs targeting plaques, for which recently many failed clinical trials were reported. But above all, it led to a lack of progress in improvement of treatments for those patients who need it most. This shocking discovery calls for action and a change in the way we evaluate biomarkers in general, but also within the field of psychophysiology and EEG biomarkers, where many biomarker-studies suffer from small sample sizes as well as a lack of (out-of-sample) validation, rendering them statistically underpowered and the biomarkers unapplicable in practice. As for our own EEG-examples, we now know that various meta-analyses have failed to confirm some of the most wellknown diagnostic EEG biomarkers, such as theta-beta ratio (TBR) in ADHD and frontal alpha asymmetry (FAA) in MDD. This latter meta-analysis indicated earlier conclusions on FAA were driven by underpowered studies, and that only sample sizes of >200 yielded biologically plausible effects. Therefore, large datasets are necessary for meaningful biomarker discovery. Unfortunately, we’ve experienced not being able to replicate our own research ourselves as well. In 2012, we reported on three biomarkers that predicted non-response to rTMS in depression: high fronto-central theta, a low individual alpha peak frequency (iAPF), and a large P300 amplitude at site Pz. Our sample comprised 90 MDD patients, however, in a new sample of 106 patients, replication failed for all three biomarkers. This disappointing experience prompted us to only publish on biomarkers when we conduct out-of-sample validation and replication. The most recent example was Brainmarker-I as a treatment stratification biomarker in ADHD, that included a total of 4249 EEG’s to develop the biomarker, two datasets (N = 472) to investigate its predictive value, and three independent datasets (N = 336) for blinded-out-of-sample validation where predictions of the biomarker are confirmed by an external researcher not involved in the data-analysis. In addition, together with a group of EEG research colleagues, we established the International Consortium on Neuromodulation Discovery of Biomarkers (ICON-DB) that aims to make EEG data from repetitive Transcranial Magnetic Stimulation (rTMS) studies, available for direct replication. This ICON-DB initiative already resulted in a published non-replication and a successful replication of EEG biomarkers. 
Unfortunately, we have also experienced not being able to replicate our own research. In 2012, we reported on three biomarkers that predicted non-response to rTMS in depression: high fronto-central theta, a low individual alpha peak frequency (iAPF), and a large P300 amplitude at site Pz. That sample comprised 90 MDD patients; however, in a new sample of 106 patients, replication failed for all three biomarkers. This disappointing experience prompted us to publish on biomarkers only when we conduct out-of-sample validation and replication. The most recent example is Brainmarker-I as a treatment-stratification biomarker in ADHD, which included a total of 4249 EEGs to develop the biomarker, two datasets (N = 472) to investigate its predictive value, and three independent datasets (N = 336) for blinded out-of-sample validation, in which the biomarker's predictions were confirmed by an external researcher not involved in the data analysis. In addition, together with a group of EEG research colleagues, we established the International Consortium on Neuromodulation Discovery of Biomarkers (ICON-DB), which aims to make EEG data from repetitive Transcranial Magnetic Stimulation (rTMS) studies available for direct replication. This ICON-DB initiative has already resulted in a published non-replication and a successful replication of EEG biomarkers.

A promising new development in EEG research, which also needs large datasets, is the use of artificial intelligence (AI) as an advanced signal-processing tool. To successfully employ AI techniques (eg, machine learning or deep learning), one should prevent overfitting, since overfitting commonly leads to a lack of generalization to unseen data and therefore negates the applicability of the specific AI model. To do this, the total dataset should be subdivided into training and validation sets (together used to develop a model) and an independent, separately held-out test set (to test generalizability/replicability). Therefore, large amounts of data are imperative. Some studies using large datasets have successfully applied AI models, for example to define EEG characteristics that classify sex, neurological EEG pathology, and response to different types of therapy. Unfortunately, the literature is full of EEG-AI studies in which no validation test sets are used, or in which small samples of N < 50 (or samples of unreported size) are used without cross-validation, yet with claimed accuracies of >90% (for reviews see), showing that this field is also suffering from a reproducibility crisis. Algorithms developed on overfitted data without validation may not generalize well to the clinic.
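As a minimal sketch of the splitting scheme described above (using scikit-learn with randomly generated placeholder data standing in for real EEG-derived features), the development data are used for model fitting and cross-validation, while the separately held-out test set is scored only once, after the model is fixed.

```python
# Minimal development/held-out splitting sketch; X and y are random placeholders
# standing in for EEG-derived features and binary labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))      # hypothetical EEG feature matrix
y = rng.integers(0, 2, size=1000)    # hypothetical binary labels

# Hold out an independent test set first; it is never touched during model development.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Develop and select the model on the development data only (here via 5-fold cross-validation).
model = LogisticRegression(max_iter=1000)
cv_accuracy = cross_val_score(model, X_dev, y_dev, cv=5).mean()

# Only after the model is fixed is it evaluated once on the held-out test set.
model.fit(X_dev, y_dev)
test_accuracy = model.score(X_test, y_test)
print(f"cross-validated accuracy: {cv_accuracy:.2f}, held-out accuracy: {test_accuracy:.2f}")
```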
To aid researchers in the development and validation of EEG biomarkers, and in the development of new (AI) methodologies, we hereby also announce our open-access EEG dataset: the Two Decades Brainclinics Research Archive for Insights in Neuroscience (TDBRAIN). TDBRAIN consists of 1274 well-phenotyped EEGs from healthy and psychiatric participants.