{"title":"“Eh? Aye!”: Categorisation bias for natural human vs AI-augmented voices is influenced by dialect","authors":"Neil W. Kirk","doi":"10.1016/j.chbah.2025.100153","DOIUrl":"10.1016/j.chbah.2025.100153","url":null,"abstract":"<div><div>Advances in AI-assisted voice technology have made it easier to clone or disguise voices, creating a wide range of synthetic voices using different accents, dialects, and languages. While these developments offer positive applications, they also pose risks for misuse. This raises the question as to whether listeners can reliably distinguish between human and AI-enhanced speech and whether prior experiences and expectations about language varieties that are traditionally less-represented by technology affect this ability. Two experiments were conducted to investigate listeners’ ability to categorise voices as human or AI-enhanced in both a standard and a regional Scottish dialect. Using a Signal Detection Theory framework, both experiments explored participants' sensitivity and categorisation biases. In Experiment 1 (<em>N</em> = 100), a predominantly Scottish sample showed above-chance performance in distinguishing between human and AI-enhanced voices, but there was no significant effect of dialect on sensitivity. However, listeners exhibited a bias toward categorising voices as “human”, which was concentrated within the regional Dundonian Scots dialect. In Experiment 2 (<em>N</em> = 100) participants from southern and eastern England, demonstrated reduced overall sensitivity and a <em>Human Categorisation Bias</em> that was more evenly spread across the two dialects. These findings have implications for the growing use of AI-assisted voice technology in linguistically diverse contexts, highlighting both the potential for enhanced representation of Minority, Indigenous, Non-standard and Dialect (MIND) varieties, and the risks of AI misuse.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100153"},"PeriodicalIF":0.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143833546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Socially excluded employees prefer algorithmic evaluation to human assessment: The moderating role of an interdependent culture","authors":"Yoko Sugitani , Taku Togawa , Kosuke Motoki","doi":"10.1016/j.chbah.2025.100152","DOIUrl":"10.1016/j.chbah.2025.100152","url":null,"abstract":"<div><div>Organizations have embraced artificial intelligence (AI) technology for personnel assessments such as document screening, interviews, and evaluations. However, some studies have reported employees' aversive reactions to AI-based assessment, while others have shown their appreciation for AI. This study focused on the effect of workplace social context, specifically social exclusion, on employees’ attitudes toward AI-based personnel assessment. Drawing on cognitive dissonance theory, we hypothesized that socially excluded employees perceive human evaluation as unfair, leading to their belief that AI-based assessments are fairer and, in turn, a favorable attitude toward AI evaluation. Through three experiments wherein workplace social relationships (social exclusion vs. inclusion) were manipulated, we demonstrated that socially excluded employees showed a higher positive attitude toward algorithmic assessment compared with those who were socially included. Further, this effect was mediated by perceived fairness of AI assessment, and more evident in an interdependent (but not independent) self-construal culture. These findings offer novel insights into psychological research on computer use in professional practices.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100152"},"PeriodicalIF":0.0,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143829287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the efficacy of Amanda: A voice-based large language model chatbot for relationship challenges","authors":"Laura M. Vowels , Shannon K. Sweeney , Matthew J. Vowels","doi":"10.1016/j.chbah.2025.100141","DOIUrl":"10.1016/j.chbah.2025.100141","url":null,"abstract":"<div><div>Digital health interventions are increasingly necessary to bridge gaps in mental health care, providing scalable and accessible solutions to address unmet needs. Relationship challenges, a significant driver of individual well-being and distress, are often under-supported due to barriers such as stigma, cost, and limited access to trained therapists. This study evaluates Amanda, a GPT-4-powered voice-based chatbot, designed to deliver single-session relationship support and enhance therapeutic engagement through natural and collaborative interactions. Participants (N = 54) completed a range of clinical outcome measures and their attitudes toward chatbots and digital health interventions pre- and post-intervention as well as two weeks later. In the interactions with the chatbot, the participants explored a range of relational issues and reported significant improvements in problem-specific outcomes, including reduced distress, enhanced communication, and greater confidence in managing conflicts directly after the interaction as well as two weeks later. While generic relationship outcomes showed only delayed improvements, individual well-being did not significantly change. Participants rated Amanda highly on usability, therapeutic skills, and working alliance, with reduced repetitiveness compared to the text-based version. These findings underscore the potential of voice-based chatbots to deliver accessible and effective relationship support. Future research should explore multi-session formats, clinical populations, and comparisons with other large language models to refine and expand AI-powered interventions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100141"},"PeriodicalIF":0.0,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143768186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ain’t blaming you: Delegation of financial decisions to humans and algorithms","authors":"Zilia Ismagilova , Matteo Ploner","doi":"10.1016/j.chbah.2025.100147","DOIUrl":"10.1016/j.chbah.2025.100147","url":null,"abstract":"<div><div>This article investigates the tendency to prioritize outcomes when evaluating decision-making processes, particularly in situations where choices are assigned to either a human or an algorithm. In our experiment, a Principal delegates a risky financial decision to an Agent, who can choose to act independently or to use an algorithm. The Principal then rewards or penalizes the Agent based on investment performance, while we manipulate the Principal’s knowledge of the outcome during the evaluation. Our results confirm a significant outcome bias, indicating that the assessment of decision effectiveness remains heavily influenced by results, whether the decision is made by the Agent or delegated to an algorithm. Furthermore, the Agent’s reliance on the algorithm and the level of investment risk do not change depending on whether rewards or penalties are decided before or after the outcome is known.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100147"},"PeriodicalIF":0.0,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143739926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception and social evaluation of cloned and recorded voices: Effects of familiarity and self-relevance","authors":"Victor Rosi, Emma Soopramanien, Carolyn McGettigan","doi":"10.1016/j.chbah.2025.100143","DOIUrl":"10.1016/j.chbah.2025.100143","url":null,"abstract":"<div><div>Modern speech technologies enable the artificial replication, or cloning, of the human voice. In the present study, we investigated whether listeners' perception and social evaluation of state-of-the-art voice clones depend on whether the clone being heard is a replica of the self, a friend, or a total stranger. We recorded and cloned the voices of familiar pairs of adult participants. Forty-seven of these experimental participants (and 47 unfamiliar controls) rated the Trustworthiness, Attractiveness, Competence, and Dominance of cloned and recorded samples of their own voice and their friend's voice. We observed that while familiar listeners found clones to sound less (or similarly) trustworthy, attractive, and competent than recordings, unfamiliar listeners showed an opposing profile in which clones tended to be rated higher than recordings. Within this, familiar listeners tended to prefer their friend's voice to their own, although perceived similarity of both self- and friend-voice clones to the original speaker identity predicted higher ratings on all trait scales. Overall, we find that familiar listeners' impressions are sensitive to the perceived accuracy and authenticity of cloning for voices they know well, while unfamiliar listeners tend to prefer the synthetic versions of those same voice identities. The latter observation may relate to the tendency of generative voice synthesis models to homogenise speaking accents and styles, such that they more closely approximate (preferred) norms.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100143"},"PeriodicalIF":0.0,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143739921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gender biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions","authors":"Jerlyn Q.H. Ho , Andree Hartanto , Andrew Koh , Nadyanna M. Majeed","doi":"10.1016/j.chbah.2025.100145","DOIUrl":"10.1016/j.chbah.2025.100145","url":null,"abstract":"<div><div>The growing adoption of Artificial Intelligence (AI) in various sectors has introduced significant benefits, but also raised concerns over biases, particularly in relation to gender. Despite AI's potential to enhance sectors like healthcare, education, and business, it often mirrors reality and its societal prejudices and can manifest itself through unequal treatment in hiring decisions, academic recommendations, or healthcare diagnostics, systematically disadvantaging women. This paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender biases due to inherent flaws in training data, algorithms, and user feedback loops. This problem stems from several sources, including biased training datasets, algorithmic design choices, and human biases. To mitigate these issues, various interventions are discussed, including improving data quality, diversifying datasets and annotator pools, integrating fairness-centric algorithmic approaches, and establishing robust policy frameworks at corporate, national, and international levels. Ultimately, addressing AI bias requires a multi-faceted approach involving researchers, developers, and policymakers to ensure AI systems operate fairly and equitably.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100145"},"PeriodicalIF":0.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143714239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Always check important information!” - The role of disclaimers in the perception of AI-generated content","authors":"Angelica Lermann Henestrosa , Joachim Kimmerle","doi":"10.1016/j.chbah.2025.100142","DOIUrl":"10.1016/j.chbah.2025.100142","url":null,"abstract":"<div><div>Generative AI, and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their widespread availability, these models come with critical weaknesses, such as a lack of factual accuracy. Being informed about the advantages and disadvantages of these tools is essential for using AI safely and adequately, yet not everyone is aware of them. Therefore, we explored in three experimental studies how disclaimers affect people's perceptions of AI-authorship and AI-generated content on scientific topics. Additionally, we investigated the impact of information presentation and authorship attributions—whether content is authored solely by AI or co-authored with humans. Across the experiments, no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions were found. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI's strengths vs. limitations did not. In addition, we found participants to believe in the machine heuristic, that is, to attribute more accuracy and less bias to AI than to human authors. Study 2 revealed interaction effects between authorship and disclaimer type, providing insights into possible balancing effects of human-AI co-authorship. In Study 3, both strengths and limitations disclaimers induced higher credibility ratings than basic disclaimers. This research suggests that disclaimers fail to univocally influence the perception of AI-generated output. Further interventions should be developed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100142"},"PeriodicalIF":0.0,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143697985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harnessing the power of AI in qualitative research: Exploring, using and redesigning ChatGPT","authors":"He Zhang (Albert) , Chuhao Wu , Jingyi Xie , Yao Lyu , Jie Cai , John M. Carroll","doi":"10.1016/j.chbah.2025.100144","DOIUrl":"10.1016/j.chbah.2025.100144","url":null,"abstract":"<div><div>AI tools, particularly large-scale language model (LLM) based applications such as ChatGPT, have the potential to mitigate qualitative research workload. In this study, we conducted semi-structured interviews with 17 participants and held a co-design session with 13 qualitative researchers to develop a framework for designing prompts specifically crafted to support junior researchers and stakeholders interested in leveraging AI for qualitative research. Our findings indicate that improving transparency, providing guidance on prompts, and strengthening users' understanding of LLMs' capabilities significantly enhance their ability to interact with ChatGPT. By comparing researchers' attitudes toward LLM-supported qualitative analysis before and after the co-design process, we reveal that the shift from an initially negative to a positive perception is driven by increased familiarity with the LLM's capabilities and the implementation of prompt engineering techniques that enhance response transparency and, in turn, foster greater trust. This research not only highlights the importance of well-designed prompts in LLM applications but also offers reflections for qualitative researchers on the perception of AI's role. Finally, we emphasize the potential ethical risks and the impact of constructing AI ethical expectations by researchers, particularly those who are novices, on future research and AI development.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100144"},"PeriodicalIF":0.0,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143706351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing emotional support in human-robot interaction: Implementing emotion regulation mechanisms in a personal drone","authors":"Ori Fartook , Zachary McKendrick , Tal Oron-Gilad , Jessica R. Cauchard","doi":"10.1016/j.chbah.2025.100146","DOIUrl":"10.1016/j.chbah.2025.100146","url":null,"abstract":"<div><div>We propose that social robots can enhance their social abilities by supporting peoples' emotional needs. We examined this concept by implementing four different mechanisms aimed at providing Emotional Support in a personal drone. These mechanisms (Affective Empathy, Cognitive Empathy, Positive Emotion Regulation (PER), and a Reasoning mechanism (yoU-turn)) provide various aspects of support ranging on the Emotional-Reasoning spectrum. In an online study (<em>N</em> = 95), first, participants were asked to sequentially recall situations where they experienced one of six emotional states (i.e., being calm, bored, excited, hyperactivated, scared, or sleepy).</div><div>Following each induced emotion, participants ranked their preferred drone response to their specific emotional state. Results indicate that participants' preferences were based on the valence of their emotional state, emphasizing the need for social drones to have multiple response mechanisms to support their users. This work contributes to the field of human-robot interaction by implementing validated support mechanisms into a robotic system as its emotional responses.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100146"},"PeriodicalIF":0.0,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143681878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"If ChatGPT can do it, where is my creativity? generative AI boosts performance but diminishes experience in creative writing","authors":"Peidong Mei , Deborah N. Brewis , Fortune Nwaiwu , Deshan Sumanathilaka , Fernando Alva-Manchego , Joanna Demaree-Cotton","doi":"10.1016/j.chbah.2025.100140","DOIUrl":"10.1016/j.chbah.2025.100140","url":null,"abstract":"<div><div>As generative AI (GenAI) becomes more sophisticated, it is increasingly being used as a tool to enhance creative expression and innovation. Along with its potential benefits, it is imperative that we examine pitfalls in how generative AI may affect the quality of creative thinking and possibly lead to a narrowing of diversity both in representation and thought. In this study, we employed an experimental design with 225 university students who completed a creative writing task with pre- and post-task surveys to assess ChatGPT's impact on their performance and experiences compared to a control group who did not use ChatGPT. Results show that using ChatGPT enhanced creativity of output and reduced the difficulty and effort required for the task, particularly for non-native English speakers. However, it also diminished the value and enjoyment of the task and raised moral concerns. We contribute to the nascent literature on GenAI by showing how ChatGPT assistance could potentially bolster human creativity by facilitating content delivery or providing useful counterpoint ideas. We also significantly advance scholarship on understanding experience of GenAI, demonstrating that bypassing the cognitive effort required for creativity by using ChatGPT could be harmful to the creative process and experience of creative tasks, especially when steps are not taken to address the use of AI in a transparent manner. Finally, our novel mixed-method study design offers a contribution to the methodological frameworks for the study of the effects and experience of GenAI. We discuss the study results in relation to implications for educational practices and social policy and argue that our results support recommending an integration of generative AI into higher education alongside practices that help to mitigate the negative impacts of AI use on student experience.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100140"},"PeriodicalIF":0.0,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143739925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}