{"title":"Towards Robust, Reproducible, and Clinically Actionable EEG Biomarkers: Large Open Access EEG Database for Discovery and Out-of-sample Validation.","authors":"Hanneke van Dijk, Mark Koppenberg, Martijn Arns","doi":"10.1177/15500594221120516","DOIUrl":null,"url":null,"abstract":"Recently, one of the most impactful neuroscience findings that forms a fundament of the amyloid hypothesis of Alzheimer was called into question after evidence was presented about image tampering of Western Blots (see for full overview). A 2006 Nature paper which reported that Aβ clumps, also known as plaques, in brain tissue are the primary cause of Azheimer’s symptoms, formed a key piece of evidence of this well-known amyloid hypothesis. The image tampering was not only found in the 2006 study, follow-up investigations uncovered 100’s of doctored images, including at least 70 from the original group, calling much of the evidence behind the amyloid hypothesis into question. This means that research on Alzheimer’s has been misdirected for >16 years, billions of research dollars mis-spent and probably a multitude of that amount spent by the pharmacological industry on development of drugs targeting plaques, for which recently many failed clinical trials were reported. But above all, it led to a lack of progress in improvement of treatments for those patients who need it most. This shocking discovery calls for action and a change in the way we evaluate biomarkers in general, but also within the field of psychophysiology and EEG biomarkers, where many biomarker-studies suffer from small sample sizes as well as a lack of (out-of-sample) validation, rendering them statistically underpowered and the biomarkers unapplicable in practice. As for our own EEG-examples, we now know that various meta-analyses have failed to confirm some of the most wellknown diagnostic EEG biomarkers, such as theta-beta ratio (TBR) in ADHD and frontal alpha asymmetry (FAA) in MDD. 
This latter meta-analysis indicated earlier conclusions on FAA were driven by underpowered studies, and that only sample sizes of >200 yielded biologically plausible effects. Therefore, large datasets are necessary for meaningful biomarker discovery. Unfortunately, we’ve experienced not being able to replicate our own research ourselves as well. In 2012, we reported on three biomarkers that predicted non-response to rTMS in depression: high fronto-central theta, a low individual alpha peak frequency (iAPF), and a large P300 amplitude at site Pz. Our sample comprised 90 MDD patients, however, in a new sample of 106 patients, replication failed for all three biomarkers. This disappointing experience prompted us to only publish on biomarkers when we conduct out-of-sample validation and replication. The most recent example was Brainmarker-I as a treatment stratification biomarker in ADHD, that included a total of 4249 EEG’s to develop the biomarker, two datasets (N = 472) to investigate its predictive value, and three independent datasets (N = 336) for blinded-out-of-sample validation where predictions of the biomarker are confirmed by an external researcher not involved in the data-analysis. In addition, together with a group of EEG research colleagues, we established the International Consortium on Neuromodulation Discovery of Biomarkers (ICON-DB) that aims to make EEG data from repetitive Transcranial Magnetic Stimulation (rTMS) studies, available for direct replication. This ICON-DB initiative already resulted in a published non-replication and a successful replication of EEG biomarkers. A promising new development in EEG research, also needing large datasets, is the use of artificial intelligence (AI) as an advanced signal processing tool. 
To successfully employ AI techniques (eg, machine-learning or deeplearning) one should prevent overfitting since this commonly leads to a lack of generalization to unseen data and therefore negates the applicability of the specific AI model. To do this, the total dataset should be sub-divided into trainingand validation-sets (together used to develop a model) and an independent and separately held-out test-sets (to test the generalizability/replicability). Therefore, large amounts of data are imperative. Some studies using large datasets have successfully used AI models, for example to define EEG characteristics to classify sex, neurological EEG pathology, and response to different types of therapy. Unfortunately, the literature is full of EEG-AI studies where no validation test-sets are used, or small samples of N < 50 (or not reported) are used without cross-validation, with “claimed accuracies’ of >90% (for reviews see,) showing this field is also suffering from a reproducibility crisis. Algorithms developed with over-fitted data without validation may not well generalize to the clinic. To aid researchers in development and validation of EEG biomarkers, and development of new (AI) methodologies, we hereby also announce our open access EEG dataset: the Two Decades Brainclinics Research Archive for Insights in Neuroscience (TDBRAIN). 
TDBRAIN consists of 1274 well phenotyped EEGs from healthy and psychiatric participants Editorial","PeriodicalId":10682,"journal":{"name":"Clinical EEG and Neuroscience","volume":null,"pages":null},"PeriodicalIF":1.6000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical EEG and Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/15500594221120516","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
Citations: 2
Abstract
Recently, one of the most impactful neuroscience findings, a fundament of the amyloid hypothesis of Alzheimer's disease, was called into question after evidence of image tampering in Western blots was presented (see for full overview). A 2006 Nature paper, which reported that Aβ clumps (also known as plaques) in brain tissue are the primary cause of Alzheimer's symptoms, formed a key piece of evidence for this well-known amyloid hypothesis. The image tampering was not confined to the 2006 study: follow-up investigations uncovered hundreds of doctored images, including at least 70 from the original group, calling much of the evidence behind the amyloid hypothesis into question. This means that research on Alzheimer's has been misdirected for more than 16 years, billions of research dollars misspent, and probably a multiple of that amount spent by the pharmaceutical industry on the development of drugs targeting plaques, for which many failed clinical trials were recently reported. Above all, it led to a lack of progress in improving treatments for the patients who need them most. This shocking discovery calls for action and a change in the way we evaluate biomarkers in general, but also within the field of psychophysiology and EEG biomarkers specifically, where many biomarker studies suffer from small sample sizes and a lack of (out-of-sample) validation, rendering them statistically underpowered and the biomarkers inapplicable in practice. As for our own EEG examples, we now know that various meta-analyses have failed to confirm some of the most well-known diagnostic EEG biomarkers, such as the theta/beta ratio (TBR) in ADHD and frontal alpha asymmetry (FAA) in MDD. The latter meta-analysis indicated that earlier conclusions on FAA were driven by underpowered studies, and that only sample sizes of >200 yielded biologically plausible effects. Large datasets are therefore necessary for meaningful biomarker discovery.
Unfortunately, we have also experienced being unable to replicate our own research. In 2012, we reported three biomarkers that predicted non-response to rTMS in depression: high fronto-central theta, a low individual alpha peak frequency (iAPF), and a large P300 amplitude at site Pz. Our sample comprised 90 MDD patients; however, in a new sample of 106 patients, replication failed for all three biomarkers. This disappointing experience prompted us to publish on biomarkers only when we conduct out-of-sample validation and replication. The most recent example is Brainmarker-I as a treatment-stratification biomarker in ADHD, which used a total of 4249 EEGs to develop the biomarker, two datasets (N = 472) to investigate its predictive value, and three independent datasets (N = 336) for blinded out-of-sample validation, in which the biomarker's predictions are confirmed by an external researcher not involved in the data analysis. In addition, together with a group of EEG research colleagues, we established the International Consortium on Neuromodulation Discovery of Biomarkers (ICON-DB), which aims to make EEG data from repetitive transcranial magnetic stimulation (rTMS) studies available for direct replication. This ICON-DB initiative has already resulted in a published non-replication and a successful replication of EEG biomarkers. A promising new development in EEG research, which also requires large datasets, is the use of artificial intelligence (AI) as an advanced signal-processing tool. To employ AI techniques (eg, machine learning or deep learning) successfully, one should prevent overfitting, since overfitting commonly leads to a lack of generalization to unseen data and therefore negates the applicability of the AI model. To do this, the total dataset should be subdivided into training and validation sets (together used to develop a model) and an independent, separately held-out test set (to test generalizability/replicability).
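The split described above can be sketched as follows. This is a minimal illustration on hypothetical subject IDs; the function name, split fractions, and data are ours, not from the editorial. Note that partitioning is done at the subject level, so that recordings from one participant cannot leak between sets:

```python
import random

def train_val_test_split(subject_ids, val_frac=0.2, test_frac=0.2, seed=42):
    """Shuffle subject IDs and partition them into train/validation/test.

    Splitting by subject (not by recording) keeps all data from one
    participant in a single partition, avoiding leakage across sets.
    """
    rng = random.Random(seed)          # fixed seed -> reproducible split
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]                  # held out, evaluated only once
    val = shuffled[n_test:n_test + n_val]     # model selection / tuning
    train = shuffled[n_test + n_val:]         # model fitting
    return train, val, test

# Hypothetical cohort of 100 subjects:
train, val, test = train_val_test_split(range(100))
print(len(train), len(val), len(test))  # 60 20 20
```

The key design choice is that the test set is never consulted during model development; it is touched exactly once, to estimate generalization to unseen data.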
Large amounts of data are therefore imperative. Some studies using large datasets have successfully applied AI models, for example to define EEG characteristics that classify sex, neurological EEG pathology, and response to different types of therapy. Unfortunately, the literature is full of EEG-AI studies in which no validation or test sets are used, or in which small samples of N < 50 (or samples of unreported size) are used without cross-validation, with "claimed accuracies" of >90% (for reviews see,), showing that this field is also suffering from a reproducibility crisis. Algorithms developed on overfitted data without validation may not generalize well to the clinic. To aid researchers in the development and validation of EEG biomarkers, and in the development of new (AI) methodologies, we hereby also announce our open-access EEG dataset: the Two Decades Brainclinics Research Archive for Insights in Neuroscience (TDBRAIN). TDBRAIN consists of 1274 well-phenotyped EEGs from healthy and psychiatric participants.
Journal description:
Clinical EEG and Neuroscience conveys clinically relevant research and development in electroencephalography and neuroscience. Original articles on any aspect of clinical neurophysiology or related work in allied fields are invited for publication.