{"title":"Distinct patterns of social media use in later life: Digitally mediated social environments and a dual-association pattern of human vulnerability","authors":"Yuna Seo, Yuki Nakada","doi":"10.1016/j.chbah.2026.100313","DOIUrl":"10.1016/j.chbah.2026.100313","url":null,"abstract":"<div><div>As digitally mediated social environments increasingly structure everyday interaction, understanding how such artificial environments relate to human vulnerability has become a critical challenge. This study examines how distinct patterns of social media engagement, active/expressive versus informational-communicative use, are associated with frailty among older adults, and whether psychological well-being and lifestyle behaviors operate as intervening associations rather than causal pathways. Survey data were collected from 963 community-dwelling Japanese adults aged 65–89. Exploratory factor analysis identified two dimensions of social media use. Logistic regression using a quartile-based frailty definition showed that informational-communicative social media use was associated with a higher likelihood of frailty, whereas active/expressive use was not. However, this direct association was not robust in sensitivity analyses using the established clinical Kihon Checklist cutoff; under this clinical criterion, informational-communicative use was no longer significantly associated with frailty and the odds ratio changed direction. In contrast, psychological well-being and healthy lifestyle behaviors were consistently associated with lower frailty across operationalizations. Mediation analyses conducted within a structural equation modeling framework suggested a modest indirect protective association through psychological well-being and healthier lifestyle behaviors. Taken together, the findings indicate a definition-sensitive and dual-association pattern, rather than a robust direct association between informational-communicative use and clinically defined frailty. 
Cluster analysis further demonstrated substantial heterogeneity in digital engagement, with high-use individuals exhibiting the highest frailty prevalence under the quartile-based frailty classification. These findings should be interpreted as a pattern of associations rather than causal pathways, as the cross-sectional design does not permit temporal inference; alternative explanations, including reverse causality, remain plausible. Nevertheless, the results suggest that artificially mediated social environments may be linked to heterogeneous patterns of human vulnerability in later life, particularly at the level of relative or subclinical vulnerability rather than clinically defined frailty. This nuanced association pattern underscores the importance of age-sensitive design and governance of digital social systems.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"8 ","pages":"Article 100313"},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147802893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Talking about mental health with AI-based digital personas: Understanding what users disclose","authors":"Nicole Carre , Shirin Aghakhani , Barbora Siposova , Michaela Slezák Polónyová , Tereza Pazderová , Eduardo L. Bunge","doi":"10.1016/j.chbah.2026.100311","DOIUrl":"10.1016/j.chbah.2026.100311","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"8 ","pages":"Article 100311"},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147802987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI and synthetic happiness in esports athletes and recreational gamers: Between code, competition, and well-being","authors":"Jorge A. Ruiz-Vanoye , Francisco R. Trejo-Macotela , Ocotlán Diaz-Parra , Jaime Aguilar-Ortiz , Miguel A. Ruiz-Jaimes , Yadira Toledo-Navarro , Alejandro Fuentes-Penna , Ricardo A. Barrera-Cámara , Marco A. Vera-Jiménez","doi":"10.1016/j.chbah.2026.100309","DOIUrl":"10.1016/j.chbah.2026.100309","url":null,"abstract":"<div><div>Artificial Intelligence (AI) is increasingly embedded in esports, shaping training, performance, and player experience. While much attention has been given to technical applications, less is known about how AI-mediated systems affect psychological well-being. This Perspective Article introduces the concept of <em>synthetic happiness</em>, defined as a technology-mediated form of subjective well-being that emerges through adaptive feedback, motivational regulation, and cognitive reappraisal in digital environments. We propose two conceptual models to situate synthetic happiness within established psychological theories: the <em>Synthetic Happiness Pyramid</em> and the <em>AI-Happiness Loop</em>. The Pyramid extends Maslow's hierarchy by incorporating AI-driven adaptation as a determinant of resilience, motivation, and flourishing in esports contexts. The Loop illustrates the dynamic cycle through which biometric and behavioural monitoring inform adaptive interventions, sustaining flow, supporting Self-Determination Theory's needs of autonomy, competence, and relatedness, and mitigating burnout risk. Beyond theory, we highlight ethical concerns related to biometric data privacy, cognitive autonomy, and youth protection, emphasizing the need for transparent and responsible design. 
By integrating sport psychology, motivation science, and AI ethics, this article outlines a research agenda for empirically testing synthetic happiness models and developing frameworks that ensure AI promotes—not undermines—long-term well-being in competitive digital sport.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"8 ","pages":"Article 100309"},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147802892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When artificial minds negotiate: Dark personality and the Ultimatum Game in large language models","authors":"Vinícius Ferraz , Tamas Olah , Ratin Sazedul , Robert Schmidt , Christiane Schwieren","doi":"10.1016/j.chbah.2026.100281","DOIUrl":"10.1016/j.chbah.2026.100281","url":null,"abstract":"<div><div>Personality prompts reshape how Large Language Models propose offers in economic games—but not how they respond to them. We show this by assigning graded Dark Factor of Personality profiles to 17 LLMs in the Ultimatum Game and benchmarking their decisions against human data. As proposers, LLMs shifted from 91% fair offers at the lowest selfishness level to 17% at the highest, closely tracking human patterns but with steeper gradients. As responders, no such shift occurred: acceptance rates remained uniformly high (<span><math><mo>∼</mo></math></span>80%) regardless of personality, failing to reproduce the punishment dynamics observed in humans. This asymmetry is theoretically informative. When incentive structures are explicit, personality and framing effects are attenuated—and proposing an offer is inherently more ambiguous than responding to one. Most strikingly, personality prompts changed what responders <em>articulated</em> but not how they <em>chose</em>: model justifications showed systematic shifts in fairness language, yet behavioral output remained flat. 
This dissociation between stated reasoning and revealed behavior indicates that LLMs achieve linguistic compliance with personality prompts without corresponding motivational change—approximating human strategic behavior only where surface-level heuristics suffice.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"7 ","pages":"Article 100281"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147396361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trust in AI news, AI literacy, and the mediating role of artificial intelligence attitudes: A longitudinal study across diverse societies","authors":"Manuel Goyanes , Sonja Utz , Homero Gil de Zúñiga","doi":"10.1016/j.chbah.2026.100279","DOIUrl":"10.1016/j.chbah.2026.100279","url":null,"abstract":"<div><div>The rise of generative artificial intelligence (AI) tools, such as ChatGPT, has introduced new ways for people to access and interact with news. Yet, little is known about the factors that shape citizens' trust in AI-generated news. Drawing on a two-wave cross-national panel survey conducted in the United States (<em>n</em> = 1815), Spain (<em>n</em> = 1811), and Chile (<em>n</em> = 1802), this study investigates how AI literacy and attitudes toward AI influence trust in AI-generated news over time. We propose and test a mediation model in which AI literacy affects news trust indirectly through attitudes toward AI. Results show that while AI literacy does not exert a direct influence on trust in AI-generated news, it fosters more positive attitudes toward AI, which in turn enhance trust. This indirect effect is consistent across all three countries, suggesting an attitudinal mechanism linking AI literacy and trust in AI news. 
Overall, our findings highlight the key role of positive attitudinal evaluations in shaping public trust in communicative AI and underscore that fostering trust requires both improving citizens’ understanding of AI and addressing their broader attitudes toward this technology.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"7 ","pages":"Article 100279"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147396347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"(work)Flow bots vs. No bots: Workflow dynamics and AI utilization in teams","authors":"Sean M. Fitzhugh","doi":"10.1016/j.chbah.2026.100264","DOIUrl":"10.1016/j.chbah.2026.100264","url":null,"abstract":"<div><div>Teams face an ongoing challenge managing workflow by aligning team members with tasks that must be completed. The increasing prevalence of AI-based team members has important implications for workflow, as humans and AIs have vastly different capabilities. However, it remains unclear how real-world teams incorporate AI-based team members into workflows, and how this shapes team behaviors such as communication, which may be needed to enhance explainability of AI decision-making. This study uses a sample of 22k human-AI teams and 113k human teams on GitHub over the course of a month, and represents their workflows as dynamic networks of discrete, timestamped interactions between team members and the actions they perform. A relational event model uncovered structural patterns underlying the workflow network dynamics for each team, and standardized coefficients from those models were used to estimate whether a team included an AI team member and the level of activity of AI team members. Results showed that human-AI teams’ workflows are more routinized, overlapping, decentralized, and dominated by a handful of disproportionately active individuals; these features were stronger in teams with higher levels of AI activity. By contrast, human teams were more structurally segmented, with relatively even workflows across team members. Additionally, human-AI teams engaged in more communication, although communication levels were higher in teams with less AI activity. 
Results show clear distinctions between the types of activities performed and the structure of workflows between human and human-AI teams, suggesting distinct trajectories for development of key states and processes.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"7 ","pages":"Article 100264"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147396448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do AI voices follow social nuances? The case of politeness and speech rate","authors":"Eyal Rabin , Zohar Elyoseph , Rotem Israel-Fishelson , Adi Dali , Ravit Nussinson","doi":"10.1016/j.chbah.2026.100256","DOIUrl":"10.1016/j.chbah.2026.100256","url":null,"abstract":"<div><div>Voice-based artificial intelligence is increasingly expected to adhere to human social conventions, but can it exhibit implicit cues that are not explicitly programmed? This study investigates whether state-of-the-art text-to-speech systems have internalized the human tendency to reduce speech rate to convey politeness - a non-obvious prosodic marker. We prompted 22 synthetic voices from two leading AI platforms (AI Studio and OpenAI) to read a fixed script under both “polite and formal” and “casual and informal” conditions and measured the resulting speech duration. Across both AI platforms, the polite prompt produced slower speech than the casual prompt with very large effect sizes, an effect that was statistically significant for all of AI Studio's voices and for a large majority of OpenAI's voices. A second study confirmed that these prosodic adjustments are perceptually salient to human listeners, who successfully distinguished between the intended polite and casual styles based on the AI's output. 
These results demonstrate that AI can implicitly replicate the statistical patterns of human communication, highlighting its emerging role as a social actor that can reinforce human social norms.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"7 ","pages":"Article 100256"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147396450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prosodic cues strengthen human-AI voice boundaries: Listeners do not easily perceive human speakers and AI clones as the same person","authors":"Wenjun Chen , Marc D. Pell , Xiaoming Jiang","doi":"10.1016/j.chbah.2026.100261","DOIUrl":"10.1016/j.chbah.2026.100261","url":null,"abstract":"<div><div>Previous studies concluded that listeners struggle to discriminate AI from human voices, but these studies used monotone-like speech and did not examine prosodic expressiveness, a key advantage of human over AI speakers. This study explores whether prosodic expressiveness facilitates human-AI voice discrimination. We recorded human prosodic speech with confident and doubtful expressions, trained AI models to replicate these prosodic patterns, had AI models generate new sentences, and then had human speakers produce equivalent prosodic expressions for the same sentences. In Experiment 1, we had 48 listeners rate humanlikeness and perceived confidence in 11,808 audio samples, finding that AI speech was consistently rated as less humanlike regardless of prosody. We selected 768 audios (AI × human, confident × doubtful prosody) for Experiment 2, where 80 listeners completed an identity discrimination task, telling whether two sounds were from the same speaker. Bayesian modeling results revealed near-ceiling performance for human-human/AI-AI pairs, with inconsistent prosodies decreasing accuracy by ∼7%, while listeners do not easily categorize AI and human as sharing the same identity (∼54% accuracy when prosody matches, dropping to ∼36% when inconsistent). We observed accuracy–reaction time synchronization; in human–AI/AI–human pairs only, however, listeners relied less on distance cues when the two voices’ identities were distant beyond a certain threshold. 
Overall, we found that listeners perceive AI speech as lower in humanlikeness, and prosodic variation further promotes rejecting AI and human voices as sharing the same identity, indicating that human acceptance of AI voices as equivalent to human voices is limited.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"7 ","pages":"Article 100261"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147396451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A qualitative shift in AI capabilities: A “bitter lesson” for human-AI interaction research?","authors":"Kim Astor","doi":"10.1016/j.chbah.2026.100253","DOIUrl":"10.1016/j.chbah.2026.100253","url":null,"abstract":"<div><div>Richard Sutton's “Bitter Lesson” (2019) describes how prior knowledge-driven approaches to AI design were displaced by domain-general methods that scale with data and computation, rendering many prior insights obsolete. The technologies this shift enabled, including large language models with native multimodal capabilities, may now reshape human–AI interaction (HAI). Contemporary systems capable of fluid, expressive, open-ended interaction differ not only in degree but in kind from earlier technologies. This paper argues that HAI research may face its own “bitter lesson”: findings derived from interaction with earlier, technically constrained, systems cannot be assumed to generalize without replication and historical contextualization. It situates these developments in relation to methodological strategies and the challenges posed by rapid technological change, while highlighting how contemporary AI systems can offer a new lens on human connection and the expression of intelligence.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"7 ","pages":"Article 100253"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146188419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wall-E vs. Terminator: The relationship between physical appearance and dimensions of mind perception","authors":"Yasmina Giebeler , Basil Wahn , Eva Wiese","doi":"10.1016/j.chbah.2026.100271","DOIUrl":"10.1016/j.chbah.2026.100271","url":null,"abstract":"<div><div>Social robots are increasingly integrated into everyday environments, yet effective human-robot interaction remains challenging as robots often fail to engage social cognition the same way human partners do. Here, we examined how physical features impact whether robots are being perceived as agents \"with a mind\". We used standardized images of 251 different robots from the Anthropomorphic Robot Database (ABOT) and assessed mind perception based on ratings provided by 300 human participants. Consistent with prior findings, robots were attributed more agency (to act and plan) than experience (to sense and feel). Body components significantly explained variance in perceived agency, especially in interaction with facial features (adj. R<sup>2</sup> = 0.57). For experience, variance was best explained by a combination of body components, face and surface details (adj. R<sup>2</sup> = 0.64). Increasing human-likeness boosted perceptions of both dimensions, but the trajectories differed: experience followed a cubic function, plateauing at medium levels of human-likeness, while agency followed a quartic function with a dip around 75% human-likeness, a pattern resembling the uncanny valley. Our results indicate that specific levels of agency and experience may not be achieved by mere increase of general human-likeness. Instead, when high agency is desired (e.g., for surgery robots), design should primarily emphasize body components, supported by facial component, whereas enhancing experience (e.g., for companion robots) is best achieved using a combination of all three components. 
Our ratings linking physical robot features to dimensions of mind perception are made available, offering a comprehensive and accessible resource for experimentally manipulating perceptions of agency and experience.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"7 ","pages":"Article 100271"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146188418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}