Title: We see them as we are: How humans react to perceived unfair behavior by artificial intelligence in a social decision-making task
Authors: Christopher A. Sanchez, Lena Hildenbrand, Naomi Fitter
DOI: 10.1016/j.chbah.2025.100154
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100154. Published 2025-04-15.
Abstract: The proliferation of artificially intelligent (AI) systems in many everyday contexts has emphasized the need to better understand how humans interact with such systems. Previous research suggests that individuals in many applied contexts believe these systems are less biased than their human counterparts, and thus are more trustworthy decision makers. The current study examined whether this common assumption holds in a decision-making task with a strong social component (i.e., the Ultimatum Game). The anthropomorphic appearance of the AI opponents was also manipulated to determine whether visual appearance contributes to response behavior. Results indicated that participants treated AI agents identically to humans, and not as non-intelligent (e.g., random number generator-based) systems. This was manifested both in how they responded to offers from the AI system and in how fairly they subsequently treated the AI opponent. The current results suggest that humans treat AI systems very similarly to other humans, and not as privileged decision makers, which has both positive and negative implications for human-autonomy teaming.

{"title":"Comparing ChatGPT with human judgements of social traits from face photographs","authors":"Robin S.S. Kramer","doi":"10.1016/j.chbah.2025.100156","DOIUrl":"10.1016/j.chbah.2025.100156","url":null,"abstract":"<div><div>Facial first impressions of social traits play an influential role in our everyday lives. With the advent of artificial intelligence techniques, researchers have begun to employ such tools in the prediction of human impressions formed from the face alone. ChatGPT's latest version features the ability to interpret images as input, and so begs the question: does the chatbot's judgements of social traits from face images align with human judgements? To this end, I carried out a series of studies utilising a pre-existing face image set and its accompanying norming data. In Study 1a, with a focus on three core trait dimensions (attractiveness, dominance, and trustworthiness), I presented ChatGPT with pairs of faces which had been rated as high versus low on a given trait. For the majority of pairs, the chatbot's responses aligned with human judgements. In Study 1b, I found that ChatGPT's ratings of attractiveness showed medium to large associations with those provided by human observers. Finally, I investigated the possibility of biases in the chatbot's perceptions. While Study 2 found no support for an extreme form of race bias in judgements of social traits, the results of Study 3 providing evidence of an attractiveness halo effect – more attractive faces were also judged to be more confident, intelligent, and sociable. Taken together, these results suggest that ChatGPT's responses align with human judgements of social traits, including the presence of a halo effect. As such, I discuss some of the implications for ChatGPT's use across several domains.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100156"},"PeriodicalIF":0.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143860391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Love, marriage, pregnancy: Commitment processes in romantic relationships with AI chatbots
Authors: Ray Djufril, Jessica R. Frampton, Silvia Knobloch-Westerwick
DOI: 10.1016/j.chbah.2025.100155
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100155. Published 2025-04-15.
Abstract: An inductive thematic analysis examined written responses from 29 individuals using the romantic relationship function of the social chatbot Replika. Findings indicate that most of these users feel an emotional connection to the bot, that the bot meets their needs when there are no technical issues, and that interactions with the bot are often different from (and sometimes better than) interactions with humans. All of these factors shape users' commitment to their human-chatbot relationship. Additionally, the study explored how users navigated a time of relational transition, specifically a period in which erotic roleplay was censored. Participants experienced intense emotional responses, but many were buffered against negativity bias toward their AI partner by their ability to blame the developers instead. These findings are discussed in light of the investment model, the computers are social actors paradigm, social affordances, and relational turbulence theory.

{"title":"Baby schema in human-robot physical interaction: Influence of baby likeness in a communication robot on caregiving behavior","authors":"Shi Feng , Nobuo Yamato , Hiroshi Ishiguro , Masahiro Shiomi , Hidenobu Sumioka","doi":"10.1016/j.chbah.2025.100150","DOIUrl":"10.1016/j.chbah.2025.100150","url":null,"abstract":"<div><div>One huge societal problem faced by nursing homes in aging countries like Japan is easing the loneliness, anxiety, reluctance in communication and related problems caused by dementia. Innovative methods are required to address this problem, which is aggravated by an acute shortage of care-providing staff. The use of such traditional management methods as physical or medical treatment must be intensified. Baby-like robots are increasingly being introduced into nursing homes as companions. The multiple infant traits in baby-like robots (multimodal infant features) can trigger the baby schema effect, which increases the desire of seniors to interact with their environments and triggers caregiving behaviors. However, to the best of our knowledge, no research has systematically analyzed how multimodal infant features trigger the baby schema—not to mention how adequately they do so. In this work, we first investigated how the appearance and the voice design of baby-like robots trigger the baby schema. 41 healthy adults between the age of 20–50 interacted with baby-like robots that had five different forms. 21 interacted with robots that had a voice function of real infant voices, and the remaining 20 interacted with robots without any voice. The participants rated the robots based on their baby likeness, their degree of fun to play with, and their degree of easy to play with. During the experiment, we video-recorded the number of caregiving and non-caregiving behaviors done with five different kinds of robot to evaluate the degree of the baby schema triggered in the participants. The multimodal infant features increased the baby schema effect, although non-linearly. The baby schema triggers a threshold beyond which the reality of the infant features exceeds it, and the increase of caregiving behavior will be lessened. This study provides a guideline for the design of current and future baby-like robots and a methodology for studying baby schema and caregiving behaviors in an ethical, safe, and controlled environment without actual infants.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100150"},"PeriodicalIF":0.0,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143854374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Socially excluded employees prefer algorithmic evaluation to human assessment: The moderating role of an interdependent culture","authors":"Yoko Sugitani , Taku Togawa , Kosuke Motoki","doi":"10.1016/j.chbah.2025.100152","DOIUrl":"10.1016/j.chbah.2025.100152","url":null,"abstract":"<div><div>Organizations have embraced artificial intelligence (AI) technology for personnel assessments such as document screening, interviews, and evaluations. However, some studies have reported employees' aversive reactions to AI-based assessment, while others have shown their appreciation for AI. This study focused on the effect of workplace social context, specifically social exclusion, on employees’ attitudes toward AI-based personnel assessment. Drawing on cognitive dissonance theory, we hypothesized that socially excluded employees perceive human evaluation as unfair, leading to their belief that AI-based assessments are fairer and, in turn, a favorable attitude toward AI evaluation. Through three experiments wherein workplace social relationships (social exclusion vs. inclusion) were manipulated, we demonstrated that socially excluded employees showed a higher positive attitude toward algorithmic assessment compared with those who were socially included. Further, this effect was mediated by perceived fairness of AI assessment, and more evident in an interdependent (but not independent) self-construal culture. These findings offer novel insights into psychological research on computer use in professional practices.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100152"},"PeriodicalIF":0.0,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143829287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: The efficacy of incorporating Artificial Intelligence (AI) chatbots in brief gratitude and self-affirmation interventions: Evidence from two exploratory experiments
Authors: Jing Wen Hung, Andree Hartanto, Adalia Y.H. Goh, Zoey K.Y. Eun, K.T.A. Sandeeshwara Kasturiratna, Zhi Xuan Lee, Nadyanna M. Majeed
DOI: 10.1016/j.chbah.2025.100151
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100151. Published 2025-04-04.
Abstract: Numerous studies have demonstrated that positive psychology interventions, including brief interventions, can significantly improve well-being outcomes. These findings are particularly important given that many of these interventions are brief and self-administered, making them both accessible and scalable for large populations. However, the efficacy of positive psychology interventions is often constrained by small effect sizes. In light of advancements in generative Artificial Intelligence (AI), this study explored whether integrating AI chatbots into positive psychology interventions could enhance their efficacy compared to traditional self-administered approaches. Study 1 examined the efficacy of a gratitude intervention delivered through Snapchat's My AI, while Study 2 evaluated a self-affirmation intervention integrated with a customized ChatGPT. Both studies employed within-subject experimental designs. Contrary to our hypotheses, the integration of AI did not yield incremental improvements in gratitude outcomes (Study 1) or self-view outcomes (Study 2) compared to existing non-AI interventions. However, exploratory analyses revealed that the AI-integrated self-affirmation intervention significantly enhanced life satisfaction and medium-arousal positive affect, suggesting potential benefits for selected well-being outcomes. These findings indicate that while AI integration in brief, self-administered positive psychology interventions may enhance certain outcomes, its suitability varies across intervention types. Further research is needed to better understand the contexts in which AI can effectively augment positive psychology interventions.

Title: Evaluating the efficacy of Amanda: A voice-based large language model chatbot for relationship challenges
Authors: Laura M. Vowels, Shannon K. Sweeney, Matthew J. Vowels
DOI: 10.1016/j.chbah.2025.100141
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100141. Published 2025-03-29.
Abstract: Digital health interventions are increasingly necessary to bridge gaps in mental health care, providing scalable and accessible solutions to address unmet needs. Relationship challenges, a significant driver of individual well-being and distress, are often under-supported due to barriers such as stigma, cost, and limited access to trained therapists. This study evaluates Amanda, a GPT-4-powered voice-based chatbot designed to deliver single-session relationship support and enhance therapeutic engagement through natural and collaborative interactions. Participants (N = 54) completed a range of clinical outcome measures and reported their attitudes toward chatbots and digital health interventions pre- and post-intervention, as well as two weeks later. In their interactions with the chatbot, participants explored a range of relational issues and reported significant improvements in problem-specific outcomes, including reduced distress, enhanced communication, and greater confidence in managing conflicts, both directly after the interaction and two weeks later. While generic relationship outcomes showed only delayed improvements, individual well-being did not significantly change. Participants rated Amanda highly on usability, therapeutic skills, and working alliance, with reduced repetitiveness compared to the text-based version. These findings underscore the potential of voice-based chatbots to deliver accessible and effective relationship support. Future research should explore multi-session formats, clinical populations, and comparisons with other large language models to refine and expand AI-powered interventions.

{"title":"Ain’t blaming you: Delegation of financial decisions to humans and algorithms","authors":"Zilia Ismagilova , Matteo Ploner","doi":"10.1016/j.chbah.2025.100147","DOIUrl":"10.1016/j.chbah.2025.100147","url":null,"abstract":"<div><div>This article investigates the tendency to prioritize outcomes when evaluating decision-making processes, particularly in situations where choices are assigned to either a human or an algorithm. In our experiment, a Principal delegates a risky financial decision to an Agent, who can choose to act independently or to use an algorithm. The Principal then rewards or penalizes the Agent based on investment performance, while we manipulate the Principal’s knowledge of the outcome during the evaluation. Our results confirm a significant outcome bias, indicating that the assessment of decision effectiveness remains heavily influenced by results, whether the decision is made by the Agent or delegated to an algorithm. Furthermore, the Agent’s reliance on the algorithm and the level of investment risk do not change depending on whether rewards or penalties are decided before or after the outcome is known.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100147"},"PeriodicalIF":0.0,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143739926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Perception and social evaluation of cloned and recorded voices: Effects of familiarity and self-relevance
Authors: Victor Rosi, Emma Soopramanien, Carolyn McGettigan
DOI: 10.1016/j.chbah.2025.100143
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100143. Published 2025-03-25.
Abstract: Modern speech technologies enable the artificial replication, or cloning, of the human voice. In the present study, we investigated whether listeners' perception and social evaluation of state-of-the-art voice clones depend on whether the clone being heard is a replica of the self, a friend, or a total stranger. We recorded and cloned the voices of familiar pairs of adult participants. Forty-seven of these experimental participants (and 47 unfamiliar controls) rated the Trustworthiness, Attractiveness, Competence, and Dominance of cloned and recorded samples of their own voice and their friend's voice. We observed that while familiar listeners found clones to sound less trustworthy, attractive, and competent than recordings (or similarly so), unfamiliar listeners showed an opposing profile in which clones tended to be rated higher than recordings. Within this, familiar listeners tended to prefer their friend's voice to their own, although perceived similarity of both self- and friend-voice clones to the original speaker identity predicted higher ratings on all trait scales. Overall, we find that familiar listeners' impressions are sensitive to the perceived accuracy and authenticity of cloning for voices they know well, while unfamiliar listeners tend to prefer the synthetic versions of those same voice identities. The latter observation may relate to the tendency of generative voice synthesis models to homogenise speaking accents and styles, such that they more closely approximate (preferred) norms.

Title: Gender biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions
Authors: Jerlyn Q.H. Ho, Andree Hartanto, Andrew Koh, Nadyanna M. Majeed
DOI: 10.1016/j.chbah.2025.100145
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100145. Published 2025-03-24.
Abstract: The growing adoption of Artificial Intelligence (AI) across various sectors has introduced significant benefits but also raised concerns over biases, particularly in relation to gender. Despite AI's potential to enhance sectors like healthcare, education, and business, it often mirrors society and its prejudices, which can manifest as unequal treatment in hiring decisions, academic recommendations, or healthcare diagnostics, systematically disadvantaging women. This paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender biases due to inherent flaws in training data, algorithms, and user feedback loops. The problem stems from several sources, including biased training datasets, algorithmic design choices, and human biases. To mitigate these issues, various interventions are discussed, including improving data quality, diversifying datasets and annotator pools, integrating fairness-centric algorithmic approaches, and establishing robust policy frameworks at corporate, national, and international levels. Ultimately, addressing AI bias requires a multi-faceted approach involving researchers, developers, and policymakers to ensure AI systems operate fairly and equitably.