AI & Society · Pub Date: 2026-04-06 · DOI: 10.1007/s00146-026-03022-9
David J. Gunkel
Persuasive machines: large language models and the art of rhetoric. AI & Society 41(4), 2739–2740.
AI & Society · Pub Date: 2026-03-21 · DOI: 10.1007/s00146-026-02969-z
Didar Zowghi, Muneera Bano
Emerging roles and trends of equity, diversity, and inclusion in artificial intelligence. AI & Society 41(4), 2741–2748.
AI & Society · Pub Date: 2026-03-01 · Epub Date: 2025-10-09 · DOI: 10.1007/s00146-025-02644-9
Jemima Winifred Allen, Ivar Rodríguez Hannikainen, Julian Savulescu, Dominic Wilkinson, Brian David Earp
Is Consent-GPT valid? Public attitudes to generative AI use in surgical consent. AI & Society, 2637–2655.

Abstract: Healthcare systems often delegate surgical consent-seeking to members of the treating team other than the surgeon (e.g., junior doctors in the UK and Australia). Yet little is known about public attitudes toward this practice compared to emerging AI-supported options. This first large-scale empirical study examines how laypeople evaluate the validity and liability risks of using an AI-supported surgical consent system (Consent-GPT). We randomly assigned 376 UK participants (demographically representative for age, ethnicity, and gender) to evaluate identical transcripts of surgical consent interviews framed as being conducted by either Consent-GPT, a junior doctor, or the treating surgeon. Participants broadly agreed that AI-supported consent was valid (87.6% agreement) but rated it significantly lower than consent sought solely by human clinicians (treating surgeon: 97.6%; junior doctor: 96.2%). Participants were also substantially less satisfied with AI-supported consent than with human-only processes (Consent-GPT: 59.5% satisfied; treating surgeon: 96.8%; junior doctor: 93.1%), despite identical consent interactions (i.e., the same informational content and display format). Regarding justification to sue the hospital following a complication, participants were slightly more inclined to support legal action after AI-supported consent than after human-only consent; however, the strongest predictor was proper risk disclosure, not the consent-seeking agent. As AI integration in healthcare accelerates, these results highlight critical considerations for implementation strategies, suggesting that a hybrid approach to consent delegation, one that leverages AI's information-sharing capabilities while preserving meaningful human engagement, may be more acceptable to patients than an otherwise identical process with less human-to-human interaction.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618318/pdf/
AI & Society · Pub Date: 2026-01-21 · DOI: 10.1007/s00146-025-02718-8
Tanja Kubes
Correction: A Critique of Human-Centred AI: A Plea for a Feminist AI Framework (FAIF). AI & Society 41(4), 2839.

Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02718-8.pdf
AI & Society · Pub Date: 2026-01-13 · DOI: 10.1007/s00146-026-02859-4
Steven Watson, Satinder P. Gill, Donghee Shin, Manh-Tung Ho
Reflexive ecologies of knowledge in the future of AI & Society. AI & Society 41(1), 1–3.
AI & Society · Pub Date: 2026-01-01 · Epub Date: 2025-11-21 · DOI: 10.1007/s00146-025-02741-9
Joshua Kelsall, Xingwei Tan, Aislinn Bergin, Jiahong Chen, Maria Waheed, Tom Sorell, Rob Procter, Maria Liakata, Jenny Chim, Serene Chi
A rapid evidence review of evaluation techniques for large language models in legal use cases: trends, gaps, and recommendations for future research. AI & Society 41(4), 4025–4043.

Abstract: The legal profession faces mounting pressures, including case backlogs and limited access to legal services. Large language models (LLMs), such as OpenAI's GPT series, have been touted as potential solutions, promising to streamline tasks such as legal drafting, summarisation, analysis, and advice. Proponents argue these models can enhance efficiency, accuracy, and access to justice. However, significant risks remain: LLMs are prone to bias, factual hallucinations, and opaque reasoning processes, which can have severe consequences in high-stakes legal contexts. For responsible use in law, legal use cases must be accurately operationalised into LLM tasks that are sensitive to legal settings, as must the evaluation metrics used to assess LLMs performing those tasks. This paper presents a rapid literature review of LLM research in legal contexts since ChatGPT-4's release in March 2023. We examine how legal tasks are operationalised for LLMs and what evaluation metrics are used, focusing on how these align, or fail to align, with real-world legal practice. We argue that existing studies often overlook the institutional, organisational, and professional contexts in which these tools would be deployed. This oversight limits the practical relevance of current evaluations. We therefore propose directions for more contextually grounded research and responsible deployment strategies.

Supplementary information: The online version contains supplementary material available at 10.1007/s00146-025-02741-9.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13124847/pdf/
AI & Society · Pub Date: 2026-01-01 · Epub Date: 2025-11-14 · DOI: 10.1007/s00146-025-02731-x
Venetia Brown, Retno Larasati, Joseph Kwarteng, Tracie Farrell
Understanding AI and power: situated perspectives from Global North and South practitioners. AI & Society 41(4), 3981–3996.

Abstract: Global debates on artificial intelligence (AI) ethics and governance remain dominated by high-income, AI-intensive nations, marginalizing perspectives from low- and middle-income countries and minoritized practitioners. This qualitative study adopts a decolonial and sociotechnical lens to examine how AI practitioners across Africa, Asia, South America, and the Caribbean, along with minoritized practitioners working in high-income contexts, conceptualize AI's value, harms, and governance. Drawing on reflexive thematic analysis of 22 in-depth interviews, the study explores how geographic, cultural, and professional contexts shape practitioners' understandings of ethics, harm, and power within the global AI ecosystem. Findings reveal a dual orientation: while some participants view AI as a neutral tool shaped by human intent, others frame it as a sociotechnical system that reproduces structural inequities through data colonialism, exclusion, and epistemic dependency. Despite these asymmetries, participants articulated cautious yet agentic imaginaries of AI's potential to address local and regional problems in healthcare, education, and public governance. The study advances decolonial AI ethics by empirically grounding how ethical reasoning and governance are negotiated under constraint and by highlighting pathways toward more equitable, context-sensitive global AI governance.

Supplementary information: The online version contains supplementary material available at 10.1007/s00146-025-02731-x.

Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13124877/pdf/