Machine learning classification of active viewing of pain and non-pain images using EEG does not exceed chance in external validation samples.
Tyler Mari, S Hasan Ali, Lucrezia Pacinotti, Sarah Powsey, Nicholas Fallon
Cognitive, Affective, & Behavioral Neuroscience (published 2025-02-18). DOI: 10.3758/s13415-025-01268-2
Abstract
Previous research has demonstrated that machine learning (ML) could not effectively decode passive observation of neutral versus pain photographs using electroencephalogram (EEG) data. Consequently, the present study explored whether active viewing of neutral and pain stimuli, i.e., viewing that requires participant engagement in a task, improves ML performance. Random forest (RF) models were trained on cortical event-related potentials (ERPs) during a two-alternative forced choice paradigm, in which participants determined the presence or absence of pain in photographs of facial expressions and action scenes. Sixty-two participants were recruited for the model development sample. In addition, a within-subject temporal validation sample of 27 subjects was collected. In line with our previous research, three RF models were developed to classify images into faces and scenes, neutral and pain scenes, and neutral and pain expressions. The results demonstrated that the RF models successfully classified discrete categories of visual stimuli (faces and scenes) with accuracies of 78% and 66% on cross-validation and external validation, respectively. However, despite promising cross-validation results of 61% and 67% for the classification of neutral versus pain scenes and neutral versus pain faces, respectively, the RF models failed to exceed chance performance on the external validation dataset for both empathy classification attempts. These results align with previous research, highlighting the challenges of classifying complex states, such as pain empathy, from ERPs. Moreover, the results suggest that active observation fails to enhance ML performance beyond previous passive studies. Future research should prioritise improving model performance to obtain levels exceeding chance, which would demonstrate increased utility.
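To make the evaluation scheme concrete, the sketch below illustrates the general pattern the abstract describes: a random forest trained on ERP-derived features, scored with cross-validation on a development sample, and then tested once on a held-out external (temporal) validation sample against a 50% chance level. This is not the authors' code; the feature matrices, labels, and dimensions (X_dev, y_dev, X_ext, y_ext, 640 features) are hypothetical placeholders standing in for per-trial ERP amplitudes.

```python
# Minimal sketch of cross-validation vs. external validation for a
# two-class (neutral vs. pain) random forest on ERP features.
# All data here are random placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical development sample: 1,000 epochs x 640 ERP features
# (e.g., 64 channels x 10 time-window means); labels 0 = neutral, 1 = pain.
X_dev = rng.normal(size=(1000, 640))
y_dev = rng.integers(0, 2, size=1000)

# Hypothetical external temporal validation sample collected later.
X_ext = rng.normal(size=(400, 640))
y_ext = rng.integers(0, 2, size=400)

clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Cross-validation accuracy on the development sample
# (analogous to the 61-67% CV figures reported in the abstract).
cv_acc = cross_val_score(clf, X_dev, y_dev, cv=5, scoring="accuracy").mean()

# Refit on the full development sample, then evaluate once on the unseen
# external sample; chance for a balanced two-class problem is 0.50.
clf.fit(X_dev, y_dev)
ext_acc = clf.score(X_ext, y_ext)

print(f"cross-validation accuracy: {cv_acc:.2f}")
print(f"external validation accuracy: {ext_acc:.2f}")
```

With random placeholder data both accuracies hover near 0.50, which mirrors the paper's key point: promising cross-validation scores only demonstrate utility if they also hold up on an external validation set.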
About the journal:
Cognitive, Affective, & Behavioral Neuroscience (CABN) offers theoretical, review, and primary research articles on behavior and brain processes in humans. Coverage includes normal function as well as patients with injuries or processes that influence brain function, including neurological disorders, both healthy and disordered aging, and psychiatric disorders such as schizophrenia and depression. CABN is the leading vehicle for strongly psychologically motivated studies of brain–behavior relationships, through the presentation of papers that integrate psychological theory with the conduct and interpretation of neuroscientific data. The range of topics includes perception, attention, memory, language, problem solving, reasoning, and decision-making; emotional processes, motivation, reward prediction, and affective states; and individual differences in relevant domains, including personality. Cognitive, Affective, & Behavioral Neuroscience is a publication of the Psychonomic Society.