The Double-Edged Sword of Anthropomorphism in LLMs
Madeline G Reinecke, Fransisca Ting, Julian Savulescu, Ilina Singh
Proceedings, Volume 114, Issue 1, Article 4. Published 2025-02-26. DOI: 10.3390/proceedings2025114004
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7617520/pdf/
Abstract
Humans may have evolved to be "hyperactive agency detectors". Upon hearing a rustle in a pile of leaves, it would be safer to assume that an agent, like a lion, hides beneath (even if there may ultimately be nothing there). Can this evolutionary cognitive mechanism, and related mechanisms of anthropomorphism, explain some of people's contemporary experience with using chatbots (e.g., ChatGPT, Gemini)? In this paper, we sketch how such mechanisms may engender the seemingly irresistible anthropomorphism of large language-based chatbots. We then explore the implications of this within the educational context. Specifically, we argue that people's tendency to perceive a "mind in the machine" is a double-edged sword for educational progress: though anthropomorphism can facilitate motivation and learning, it may also lead students to trust, and potentially over-trust, content generated by chatbots. To be sure, students do seem to recognize that LLM-generated content may, at times, be inaccurate. We argue, however, that the rise of anthropomorphism towards chatbots will only serve to further camouflage these inaccuracies. We close by considering how research can turn towards aiding students in becoming digitally literate, avoiding the pitfalls caused by perceiving agency and humanlike mental states in chatbots.