‘I think I misspoke earlier. My bad!’: Exploring how generative artificial intelligence tools exploit society’s feeling rules
Lisa M Given, Sarah Polkinghorne, Alexandra Ridgway
New Media & Society, pp. 5525-5545. Published 2025-10-04. DOI: 10.1177/14614448251338276
Generative artificial intelligence (GenAI) tools that appear to perform with care and empathy can quickly gain users’ trust. For this reason, GenAI tools that attempt to replicate human responses have heightened potential to misinform and deceive people. This article examines how three GenAI tools, within divergent contexts, mimic credible emotional responsiveness: OpenAI’s ChatGPT, the National Eating Disorders Association’s Tessa and Luka’s Replika. The analysis uses Hochschild’s concept of feeling rules to explore how these tools exploit, reinforce or violate people’s internalised social guidelines around appropriate and credible emotional expression. We also examine how GenAI developers’ own beliefs and intentions can create potential social harms and conflict with users. Results show that while GenAI tools enact compliance with basic feeling rules – for example, apologising when an error is noticed – this ability alone may not sustain user interest, particularly once the tools’ inability to generate meaningful, accurate information becomes intolerable.
Journal introduction:
New Media & Society engages in critical discussions of the key issues arising from the scale and speed of new media development, drawing on a wide range of disciplinary perspectives and on both theoretical and empirical research. The journal includes contributions on:
- the individual and the social, the cultural and the political dimensions of new media
- the global and local dimensions of the relationship between media and social change
- contemporary as well as historical developments
- the implications and impacts of, as well as the determinants and obstacles to, media change
- the relationship between theory, policy and practice