Impact of ERP Reliability Cutoffs on Sample Characteristics and Effect Sizes: Performance-Monitoring ERPs in Psychosis and Healthy Controls
Gavin Heindorf, Amanda Holbrook, Bohyun Park, Gregory A Light, Philippe Rast, Dan Foti, Roman Kotov, Peter E Clayson
Psychophysiology, Volume 62, Issue 2 (February 2025), e14758
DOI: 10.1111/psyp.14758
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11839182/pdf/
Citations: 0
Abstract
In studies of event-related brain potentials (ERPs), it is common practice to exclude participants for having too few trials for analysis to ensure adequate score reliability (i.e., internal consistency). However, in research involving clinical samples, the impact of increasingly rigorous reliability standards on factors such as sample generalizability, patient versus control effect sizes, and effect sizes for within-group correlations with external variables is unclear. This study systematically evaluated whether different ERP reliability cutoffs impacted these factors in psychosis. Error-related negativity (ERN) and error positivity (Pe) were assessed during a modified flanker task in 97 patients with psychosis and 104 healthy comparison participants, who also completed measures of cognition and psychiatric symptoms. ERP reliability cutoffs had notably different effects on the factors considered. A recommended reliability cutoff of 0.80 resulted in sample bias due to systematic exclusion of patients with relatively few task errors, lower reported psychiatric symptoms, and higher levels of cognitive functioning. ERP score reliability lower than 0.80 resulted in generally smaller between- and within-group effect sizes, likely misrepresenting effect sizes. Imposing rigorous ERP reliability standards in studies of psychotic disorders might exclude high-functioning patients, which raises important considerations for the generalizability of clinical ERP research. Moving forward, we recommend examining characteristics of excluded participants, optimizing paradigms and processing pipelines for use in clinical samples, justifying reliability thresholds, and routinely reporting score reliability of all measurements, ERP or otherwise, used to examine individual differences, especially in clinical research.
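The abstract does not spell out how ERP score reliability (internal consistency) is estimated, and the authors' exact procedure is not described here. As an illustration only, the sketch below shows one common estimator for trial-averaged ERP scores: random split-half correlation with the Spearman-Brown correction. All function and variable names are hypothetical, and this is not presented as the method used in the study.

```python
import numpy as np

def split_half_reliability(trial_scores, n_splits=1000, seed=0):
    """Illustrative internal-consistency estimate for a per-subject ERP score
    (e.g., mean ERN amplitude): random split-half correlation across subjects,
    corrected with the Spearman-Brown formula.

    trial_scores: list of 1-D arrays, one per subject, holding that subject's
                  single-trial amplitudes (e.g., error-trial ERN values).
    Returns the mean corrected reliability over random splits.
    """
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_splits):
        half_a, half_b = [], []
        for trials in trial_scores:
            idx = rng.permutation(len(trials))
            mid = len(trials) // 2
            half_a.append(trials[idx[:mid]].mean())   # score from one half of trials
            half_b.append(trials[idx[mid:]].mean())   # score from the other half
        r = np.corrcoef(half_a, half_b)[0, 1]
        estimates.append(2 * r / (1 + r))             # Spearman-Brown correction
    return float(np.mean(estimates))

# Hypothetical usage: a study might retain only scores whose estimated
# reliability clears a cutoff such as 0.80, which is the kind of threshold
# whose downstream consequences the paper examines.
```

Under an estimator like this, participants with very few usable trials (e.g., patients committing few errors) contribute noisy half-scores, which is why stricter reliability cutoffs tend to exclude them.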
Journal Introduction:
Founded in 1964, Psychophysiology is the most established journal in the world specifically dedicated to the dissemination of psychophysiological science. The journal continues to play a key role in advancing human neuroscience in its many forms and methodologies (including central and peripheral measures), covering research on the interrelationships between the physiological and psychological aspects of brain and behavior. Typically, studies published in Psychophysiology include psychological independent variables and noninvasive physiological dependent variables (hemodynamic, optical, and electromagnetic brain imaging and/or peripheral measures such as respiratory sinus arrhythmia, electromyography, pupillography, and many others). The majority of studies published in the journal involve human participants, but work using animal models of such phenomena is occasionally published. Psychophysiology welcomes submissions on new theoretical, empirical, and methodological advances in: cognitive, affective, clinical and social neuroscience, psychopathology and psychiatry, health science and behavioral medicine, and biomedical engineering. The journal publishes theoretical papers, evaluative reviews of literature, empirical papers, and methodological papers, with submissions welcome from scientists in any fields mentioned above.