What makes children perceive or not perceive minds in generative AI?
Ying Xu, Trisha Thomas, Chi-Lin Yu, Echo Zexuan Pan
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100135
Published: 2025-03-03 · DOI: 10.1016/j.chbah.2025.100135
https://www.sciencedirect.com/science/article/pii/S2949882125000192
Citations: 0
Abstract
Children are increasingly engaging in dialogue and interactions with generative AI agents that can mimic human behaviors, raising questions about how children perceive and communicate with AI compared to humans. In an experimental study with 119 children aged 4–8, participants co-created stories in three conditions: with a generative AI agent via a speaker, with a physically present human partner, or with a human partner who was hidden and audible only through a speaker. Results showed a clear distinction in how children communicated with and perceived visible human partners compared to the AI. Nuanced differences also emerged in children's perceptions of hidden human partners versus the AI. When physical appearance was absent, children relied on linguistic and paralinguistic cues to assess human-likeness and form perceptions, but physical appearance became the more dominant factor when available. These results carry implications for the design of child-facing AI technologies, offering insights into how speech and physical features can be optimized to meet children's developmental and communicative needs.