Neural dynamics of mental state attribution to social robot faces
Martin Maier, Alexander Leonhardt, Florian Blume, Pia Bideau, Olaf Hellwich, Rasha Abdel Rahman
Social Cognitive and Affective Neuroscience, published 2025-04-02. DOI: 10.1093/scan/nsaf027
Abstract
The interplay of mind attribution and emotional responses is considered crucial in shaping human trust and acceptance of social robots. Understanding this interplay can help us create the right conditions for successful human-robot social interaction in alignment with societal goals. Our study shows that affective information about robots describing positive, negative, or neutral behaviour leads participants (N = 90) to attribute mental states to robot faces, modulating impressions of trustworthiness, facial expression, and intentionality. Electroencephalography recordings from 30 participants revealed that affective information influenced specific processing stages in the brain associated with early face perception (N170 component) and more elaborate stimulus evaluation (late positive potential). However, a modulation of fast emotional brain responses, typically found for human faces (early posterior negativity), was not observed. These findings suggest that neural processing of robot faces alternates between being perceived as mindless machines and intentional agents: people rapidly attribute mental states during perception, literally seeing good or bad intentions in robot faces, but are emotionally less affected than when facing humans. These nuanced insights into the fundamental psychological and neural processes underlying mind attribution can enhance our understanding of human-robot social interactions and inform policies surrounding the moral responsibility of artificial agents.