AI & Society: Latest Publications

Persuasive machines: large language models and the art of rhetoric
IF 4.7
AI & Society Pub Date: 2026-04-06 DOI: 10.1007/s00146-026-03022-9
David J. Gunkel
AI & Society 41(4): 2739–2740
Citations: 0
Conversing with machines
IF 4.7
AI & Society Pub Date: 2026-04-01 DOI: 10.1007/s00146-026-02992-0
Satinder P. Gill
AI & Society 41(3): 1635–1639
Citations: 0
Emerging roles and trends of equity, diversity, and inclusion in artificial intelligence
IF 4.7
AI & Society Pub Date: 2026-03-21 DOI: 10.1007/s00146-026-02969-z
Didar Zowghi, Muneera Bano
AI & Society 41(4): 2741–2748
Citations: 0
Is Consent-GPT valid? Public attitudes to generative AI use in surgical consent.
IF 4.7
AI & Society Pub Date: 2026-03-01 Epub Date: 2025-10-09 DOI: 10.1007/s00146-025-02644-9
Jemima Winifred Allen, Ivar Rodríguez Hannikainen, Julian Savulescu, Dominic Wilkinson, Brian David Earp
Abstract: Healthcare systems often delegate surgical consent-seeking to members of the treating team other than the surgeon (e.g., junior doctors in the UK and Australia). Yet little is known about public attitudes toward this practice compared to emerging AI-supported options. This first large-scale empirical study examines how laypeople evaluate the validity and liability risks of using an AI-supported surgical consent system (Consent-GPT). We randomly assigned 376 UK participants (demographically representative for age, ethnicity, and gender) to evaluate identical transcripts of surgical consent interviews framed as being conducted by either Consent-GPT, a junior doctor, or the treating surgeon. Participants broadly agreed that AI-supported consent was valid (87.6% agreement), but rated it significantly lower than consent sought solely by human clinicians (treating surgeon: 97.6% agreement; junior doctor: 96.2%). Participants expressed substantially lower satisfaction with AI-supported consent compared to human-only processes (Consent-GPT: 59.5% satisfied; treating surgeon: 96.8%; junior doctor: 93.1%), despite identical consent interactions (i.e., the same informational content and display format). Regarding justification to sue the hospital following a complication, participants were slightly more inclined to support legal action in response to AI-supported consent than to human-only consent. However, the strongest predictor was proper risk disclosure, not the consent-seeking agent. As AI integration in healthcare accelerates, these results highlight critical considerations for implementation strategies, suggesting that a hybrid approach to consent delegation, one that leverages AI's information-sharing capabilities while preserving meaningful human engagement, may be more acceptable to patients than an otherwise identical process with less human-to-human interaction.
AI & Society, pp. 2637–2655
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618318/pdf/
Citations: 0
The risky success of a mindless automatism
IF 4.7
AI & Society Pub Date: 2026-02-02 DOI: 10.1007/s00146-026-02884-3
Massimo Negrotti
AI & Society 41(2): 769–774
Citations: 0
Correction: A Critique of Human-Centred AI: A Plea for a Feminist AI Framework (FAIF)
IF 4.7
AI & Society Pub Date: 2026-01-21 DOI: 10.1007/s00146-025-02718-8
Tanja Kubes
AI & Society 41(4): 2839
Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-025-02718-8.pdf
Citations: 0
Reflexive ecologies of knowledge in the future of AI & Society
IF 4.7
AI & Society Pub Date: 2026-01-13 DOI: 10.1007/s00146-026-02859-4
Steven Watson, Satinder P. Gill, Donghee Shin, Manh-Tung Ho
AI & Society 41(1): 1–3
Citations: 0
AI and clichés
IF 4.7
AI & Society Pub Date: 2026-01-04 DOI: 10.1007/s00146-025-02729-5
Nana Ariel, Dana Riesenfeld
AI & Society 41(4): 3205–3217
Citations: 0
A rapid evidence review of evaluation techniques for large language models in legal use cases: trends, gaps, and recommendations for future research.
IF 4.7
AI & Society Pub Date: 2026-01-01 Epub Date: 2025-11-21 DOI: 10.1007/s00146-025-02741-9
Joshua Kelsall, Xingwei Tan, Aislinn Bergin, Jiahong Chen, Maria Waheed, Tom Sorell, Rob Procter, Maria Liakata, Jenny Chim, Serene Chi
Abstract: The legal profession faces mounting pressures, including case backlogs and limited access to legal services. Large language models (LLMs), such as OpenAI's GPT series, have been touted as potential solutions, promising to streamline tasks such as legal drafting, summarisation, analysis, and advice. Proponents argue these models can enhance efficiency, accuracy, and access to justice. However, significant risks remain. LLMs are prone to bias, factual hallucinations, and opaque reasoning processes, which can have severe consequences in high-stakes legal contexts. For responsible use in law, legal use cases must be accurately operationalised into LLM tasks that are sensitive to legal settings, as must the metrics used to evaluate LLMs performing those tasks. This paper presents a rapid literature review of LLM research in legal contexts since GPT-4's release in March 2023. We examine how legal tasks are operationalised for LLMs and what evaluation metrics are used, with a focus on how these align, or fail to align, with real-world legal practice. We argue that existing studies often overlook the institutional, organisational, and professional contexts in which these tools would be deployed. This oversight limits the practical relevance of current evaluations. We propose directions for more contextually grounded research and responsible deployment strategies.
Supplementary information: The online version contains supplementary material available at 10.1007/s00146-025-02741-9.
AI & Society 41(4): 4025–4043
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13124847/pdf/
Citations: 0
Understanding AI and power: situated perspectives from Global North and South practitioners.
IF 4.7
AI & Society Pub Date: 2026-01-01 Epub Date: 2025-11-14 DOI: 10.1007/s00146-025-02731-x
Venetia Brown, Retno Larasati, Joseph Kwarteng, Tracie Farrell
Abstract: Global debates on artificial intelligence (AI) ethics and governance remain dominated by high-income, AI-intensive nations, marginalizing perspectives from low- and middle-income countries and minoritized practitioners. This qualitative study adopts a decolonial and sociotechnical lens to examine how AI practitioners across Africa, Asia, South America, the Caribbean, and minoritized groups working in high-income contexts conceptualize AI's value, harms, and governance. Drawing on reflexive thematic analysis of 22 in-depth interviews, the study explores how geographic, cultural, and professional contexts shape practitioners' understandings of ethics, harm, and power within the global AI ecosystem. Findings reveal a dual orientation. While some participants view AI as a neutral tool shaped by human intent, others frame it as a sociotechnical system that reproduces structural inequities through data colonialism, exclusion, and epistemic dependency. Despite these asymmetries, participants articulated cautious yet agentic imaginaries of AI's potential to address local and regional problems in healthcare, education, and public governance. The study advances decolonial AI ethics by empirically grounding how ethical reasoning and governance are negotiated under constraint and by highlighting pathways toward more equitable, context-sensitive global AI governance.
Supplementary information: The online version contains supplementary material available at 10.1007/s00146-025-02731-x.
AI & Society 41(4): 3981–3996
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13124877/pdf/
Citations: 0