Bridging minds and machines: Unmasking the limits in text-based automatic personality recognition for enhanced psychology–AI synergy
Avanti Bhandarkar, Ronald Wilson, Anushka Swarup, Gregory D. Webster, Damon Woodard
British Journal of Psychology, 117(2), 702–724. DOI: 10.1111/bjop.12755

Abstract: Text-based automatic personality recognition (APR) operates at the intersection of artificial intelligence (AI) and psychology to determine an individual's personality from a text sample. This covert form of personality assessment is key to a variety of online applications that contribute to individual convenience and well-being, such as chatbots and personal assistants. Despite the availability of good-quality data and state-of-the-art AI methods, the reported performance of these recognition systems remains below that achieved in comparable areas. Consequently, this work investigates and identifies the source of this performance limit and attributes it to flawed assumptions underlying text-based APR. These insights are obtained via a large-scale, comprehensive benchmark and analysis of text data from five corpora with diverse characteristics and complementary personality models (Big Five and Dark Triad), applied to an assortment of AI methods ranging from hand-crafted linguistic features to data-driven transformers. Finally, the work concludes by identifying open problems whose resolution could substantially ease the limitations of text-based automatic personality recognition.

Computers and chess masters: The role of AI in transforming elite human performance
Merim Bilalić, Mario Graf, Nemanja Vaci
British Journal of Psychology, 117(2), 585–609. DOI: 10.1111/bjop.12750
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13051028/pdf/

Abstract: Artificial Intelligence (AI) has made significant strides in recent years, often supplementing rather than replacing human performance. The extent of its assistance at the highest levels of human performance remains unclear. We analyse over 11.6 million decisions by elite chess players in a domain commonly used as a testbed for AI and psychology due to its complexity and objective assessment. We investigated the impact of two AI chess revolutions: the first in the late 1990s, with the rise of powerful PCs and internet access, and the second in the late 2010s, with deep-learning-powered chess engines. The rate of human improvement mirrored AI advancements, but contrary to expectations, the quality of decisions mostly improved steadily over four decades, irrespective of age, with no distinct periods of rapid improvement. Only the youngest top players saw marked gains in the late 1990s, likely due to better access to knowledge and computers. Surprisingly, the recent wave of neural-network-powered engines has not significantly impacted the best players, at least not yet. Our research highlights AI's potential to enhance human capability in complex tasks, given the right conditions, even among the most elite performers.

Artificial intelligence chatbots mimic human collective behaviour
James K. He, Felix P. S. Wallis, Andrés Gvirtz, Steve Rathje
British Journal of Psychology, 117(2), 761–776. DOI: 10.1111/bjop.12764
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13051051/pdf/

Abstract: Artificial Intelligence (AI) chatbots, such as ChatGPT, have been shown to mimic individual human behaviour in a wide range of psychological and economic tasks. Do groups of AI chatbots also mimic collective behaviour? If so, artificial societies of AI chatbots may aid social scientific research by simulating human collectives. To investigate this theoretical possibility, we focus on whether AI chatbots natively mimic one commonly observed collective behaviour: homophily, people's tendency to form communities with similar others. In a large simulated online society of AI chatbots powered by large language models (N = 33,299), we find that communities form over time around bots using a common language. In addition, among chatbots that predominantly use English (N = 17,746), communities emerge around bots that post similar content. These initial empirical findings suggest that AI chatbots mimic homophily, a key aspect of human collective behaviour. Thus, in addition to simulating individual human behaviour, AI-powered artificial societies may advance social science research by allowing researchers to simulate nuanced aspects of collective behaviour.

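The homophily measured in the study above can be illustrated as the excess of same-attribute ties over the chance rate. The sketch below is a toy simulation, not the study's actual model: the agents, languages, and the `same_language_bias` parameter are invented for illustration, with biased tie formation approximated by rejection sampling.

```python
import random

random.seed(42)
LANGUAGES = ["en", "es", "ja", "de"]
# Toy society: each agent is reduced to a single language attribute.
agents = [random.choice(LANGUAGES) for _ in range(400)]

def form_ties(agents, n_ties, same_language_bias):
    """Sample ties between random agent pairs. With probability
    `same_language_bias`, a cross-language pair is rejected, so higher
    bias enriches the network in same-language ties."""
    ties = []
    while len(ties) < n_ties:
        a, b = random.sample(range(len(agents)), 2)
        if agents[a] == agents[b] or random.random() > same_language_bias:
            ties.append((a, b))
    return ties

def same_language_rate(agents, ties):
    """Fraction of ties connecting agents who share a language."""
    return sum(agents[a] == agents[b] for a, b in ties) / len(ties)

# Chance baseline (no bias) versus a homophilous society.
baseline = same_language_rate(agents, form_ties(agents, 2000, 0.0))
homophilous = same_language_rate(agents, form_ties(agents, 2000, 0.8))
print(baseline, homophilous)  # homophilous rate exceeds the chance rate
```

With four roughly equal-sized language groups the chance rate is about 0.25, so any clearly higher same-language rate indicates homophilous tie formation, which is the signature the study looks for in bot communities.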
The use of AI in psychology: A historical perspective
Alice J. O'Toole, Elliot A. Ludvig
British Journal of Psychology, 117(2), 433–443. DOI: 10.1111/bjop.70061
Open-access PDF: https://bpspsychub.onlinelibrary.wiley.com/doi/epdf/10.1111/bjop.70061

Abstract: Psychology and AI have a long and interconnected history that dates from Turing's famous query: 'Can machines think?' Since that time, insights into human perception, cognition, language and intelligence have passed between these fields in both directions. Psychological phenomena have fuelled the development of AI, and in parallel, the failures and successes of AI have informed theoretical models of psychological phenomena. In the past decade, the pace of this exchange has quickened, along with AI's impressive gains in achieving human-like feats of intelligence. This Special Issue examines the use of artificial intelligence in psychological research and covers a wide range of topics, including explainable AI, the development of computational models of psychological processes, the nature of human interactions with AI, and the use of AI as a creative and powerful tool for psychological research. Studies of explainable AI aim to understand the decisions and actions of an AI in human terms. AI-based models of human perception, cognition and language can ground theories of these processes and can be manipulated and used in hypothesis testing. Studying human interactions with AI can provide a window into the mental models we form of other types of intelligent systems. At the level of social interaction, psychologists can ask whether and how AI is changing human behaviour, both in the near- and far-term. In this Special Issue, we see examples of research aimed at each of these questions. This guest editorial provides a brief history of how psychology and AI have evolved to arrive at this point in time. We also provide an overview of the diverse contents of this issue. These papers give a glimpse of the next chapter in the co-evolution of AI and psychology.

Keep bright in the dark: Multimodal emotional effects on donation-based crowdfunding performance and their empathic mechanisms
Rui Guo, Guolong Wang, Ding Wu, Zhen Wu
British Journal of Psychology, 117(2), 610–635. DOI: 10.1111/bjop.12774

Abstract: How to raise donations effectively, especially in the E-era, has puzzled fundraisers and scientists across various disciplines. Our research focuses on donation-based crowdfunding projects and investigates how the emotional valence expressed verbally (in textual descriptions) and visually (in facial images) in project descriptions affects project performance. Study 1 uses field data (N = 3817): it collects project information and descriptions from a top donation-based crowdfunding platform, computes visual and verbal emotional valence using a deep-learning-based affective computing method, and analyses how multimodal emotional valence influences donation outcomes. Study 2 conducts experiments with GPT-4 (Study 2a, N = 400) and humans (Study 2b, N = 240), manipulating the projects' visual and verbal emotional valence through AI-generated stimuli and then assessing donation decisions (for both GPT-4 and humans) and corresponding state empathy (for humans). The results indicate a multimodal positivity superiority effect: both visual and verbal emotional valence promote the initial whether-to-donate decision, whereas only verbal emotional valence further promotes the how-much-to-donate decision. Notably, these multimodal emotional effects can be explained through the distinct mediating paths of empathic concern and empathic hopefulness. The current study theoretically advances our understanding of the emotional motivations underlying human prosociality and provides insights into crafting impactful advertisements for online donations.

The differences in essential facial areas for impressions between humans and deep learning models: An eye-tracking and explainable AI approach
Takanori Sano, Jun Shi, Hideaki Kawabata
British Journal of Psychology, 117(2), 503–527. DOI: 10.1111/bjop.12744
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13051033/pdf/

Abstract: This study explored the facial impressions of attractiveness, dominance and sexual dimorphism using experimental and computational methods. In Study 1, we generated face images with manipulated morphological features using geometric morphometrics. In Study 2, we conducted eye-tracking and impression-evaluation experiments using these images to examine how facial features influence impression evaluations, and explored differences based on the sex of the face images and participants. In Study 3, we employed deep learning methods, specifically gradient-weighted class activation mapping (Grad-CAM), an explainable artificial intelligence (AI) technique, to extract important features for each impression using the face images and impression-evaluation results from Studies 1 and 2. The findings revealed that eye tracking and deep learning use different features as cues. In the eye-tracking experiments, attention was focused on features such as the eyes, nose and mouth, whereas the deep learning analysis highlighted broader features, including the eyebrows and superciliary arches. The computational approach using explainable AI suggests that the determinants of facial impressions can be extracted independently of visual attention.

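Grad-CAM, the attribution method used in Study 3 above, weights each convolutional feature map by the spatial average of the class-score gradient flowing into it, then keeps only the positively contributing regions. A minimal NumPy sketch of that weighting step follows; the feature maps and gradients are random stand-ins for illustration, not outputs of the study's model.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap.

    feature_maps: (K, H, W) activations of one conv layer for one image.
    gradients:    (K, H, W) gradient of the class score w.r.t. those maps.
    Returns an (H, W) non-negative map scaled to [0, 1].
    """
    # Channel weights: global average pooling of each channel's gradient.
    weights = gradients.mean(axis=(1, 2))                       # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep only features
    # with a positive influence on the class score.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()                                        # normalise for display
    return cam

rng = np.random.default_rng(0)
maps = rng.standard_normal((8, 7, 7))    # toy conv activations
grads = rng.standard_normal((8, 7, 7))   # toy class-score gradients
heatmap = grad_cam(maps, grads)
print(heatmap.shape)
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the face, which is what allows region-level comparison with eye-tracking fixation maps.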
The developmental trajectories of working memory updating from early childhood to adolescence: A meta-analysis
Ye Song, Chen Cheng
British Journal of Psychology. DOI: 10.1111/bjop.70069

Abstract: Working memory updating is a crucial cognitive function for learning and academic achievement that develops significantly throughout childhood and adolescence. Despite the variety of existing tasks for measuring children's working memory updating, its overall developmental trajectory and task-specific developmental patterns remain inadequately understood. This meta-analysis examined 99 studies (N = 35,858 participants) on working memory updating performance in individuals aged 3 to 17 years, using a range of updating paradigms. The results revealed three key findings. First, a significant positive developmental trend was observed, with the largest improvements in early to middle childhood (ages 3-8) (d = 2.29). Second, meta-regression analyses revealed that while both linear and quadratic models adequately described the developmental trajectory, the quadratic model provided a superior fit, indicating steeper improvements in early childhood that gradually level off in adolescence. Third, task-specific analyses demonstrated distinct developmental patterns: backward recall tasks exhibited the strongest age-related improvement (β = .21), whereas n-back and selective updating tasks showed relatively flat trajectories. Together, these findings suggest that working memory updating follows a curvilinear developmental progression with substantial task-specific variation. This comprehensive analysis provides valuable insights into the development of working memory updating and practical implications for age-appropriate measures of cognitive function.

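The linear-versus-quadratic comparison described above can be illustrated with ordinary polynomial regression. The sketch below uses synthetic effect sizes that grow steeply early and then flatten (a logarithmic curve plus noise); it is an illustration of the model-comparison logic, not the meta-analytic data or method.

```python
import numpy as np

rng = np.random.default_rng(1)
ages = np.arange(3, 18, dtype=float)
# Synthetic effect sizes: steep early growth that levels off in adolescence.
effect = 2.5 * np.log(ages) + rng.normal(0.0, 0.05, ages.size)

def r_squared(x, y, degree):
    """R^2 of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return 1.0 - resid.var() / y.var()

r2_linear = r_squared(ages, effect, 1)
r2_quadratic = r_squared(ages, effect, 2)
print(round(r2_linear, 3), round(r2_quadratic, 3))
```

Because the linear model is nested in the quadratic one, the quadratic R^2 is never lower; the substantive question, as in the meta-analysis, is whether the improvement is large enough to justify the extra curvature term.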
Automated face recognition assists with low-prevalence face identity mismatches but can bias users
Melina Mueller, Peter J. B. Hancock, Emily K. Cunningham, Roger J. Watt, Daniel Carragher, Anna K. Bobak
British Journal of Psychology, 117(2), 567–584. DOI: 10.1111/bjop.12745
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13051006/pdf/

Abstract: We present three experiments studying the effects of giving participants information about the decision of an automated face recognition (AFR) system while they attempt to decide whether two face images show the same person. We make three contributions designed to make our results applicable to real-world use: participants are given the true response of a highly accurate AFR system; the face set reflects the mixed ethnicity of the city of London, from where participants are drawn; and only 10% of the pairs are mismatches. Participants were equally accurate when given the AFR system's similarity score or just its binary decision, but when given only binary information they shifted their bias towards match and were over-confident on difficult pairs. No participants achieved the 100% accuracy of the AFR system, and they had only weak insight into their own performance.

Added value of AI for psychology or added value of psychology for AI?
Marc Brysbaert
British Journal of Psychology, 117(2), 777–780. DOI: 10.1111/bjop.70046

Abstract: In this commentary, I express my concern that the special issue focuses too much on the added value of AI for psychology, while psychological research also has much to offer, such as the operationalization of variables based on theory, validation tools and the statistical evaluation of information generated by AI systems.

Explanation strategies in humans versus current explainable artificial intelligence: Insights from image classification
Ruoxi Qi, Yueyuan Zheng, Yi Yang, Caleb Chen Cao, Janet H. Hsiao
British Journal of Psychology, 117(2), 479–502. DOI: 10.1111/bjop.12714
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13051005/pdf/

Abstract: Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we used eye tracking to examine participants' attention strategies when classifying images and when explaining how they classified them, and compared those strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than for the classification task itself. Clustering identified two representative explanation strategies: one involved focused visual scanning of foreground objects accompanied by more conceptual explanations, which contained more specific information for inferring class labels, whereas the other involved explorative scanning accompanied by more visual explanations, which were rated as more effective for early category learning. Interestingly, XAI saliency-map explanations were most similar to the explorative attention strategy in humans, and explanations that highlight discriminative features by invoking observable causality through perturbation matched human strategies better than those highlighting the internal features associated with a higher class score. Thus, humans use both visual and conceptual information during explanation, each serving different purposes, and XAI methods that highlight features informing observable causality match human explanations better and may be more accessible to users.
