Latest articles in AI and ethics

AI governance and ethical frameworks for public-sector AI decision systems: an institutional operationalisation model
AI and ethics Pub Date: 2026-05-06 DOI: 10.1007/s43681-026-01143-x
Adeyinka Ologun, Ogechi M. Ikeakaonwu, Abiodun F. Ibidunmoye, Grace A. Eneano, Chijioke C. Chuwa, Lukman O. Kolawole
{"title":"AI governance and ethical frameworks for public-sector AI decision systems: an institutional operationalisation model","authors":"Adeyinka Ologun,&nbsp;Ogechi M. Ikeakaonwu,&nbsp;Abiodun F. Ibidunmoye,&nbsp;Grace A. Eneano,&nbsp;Chijioke C. Chuwa,&nbsp;Lukman O. Kolawole","doi":"10.1007/s43681-026-01143-x","DOIUrl":"10.1007/s43681-026-01143-x","url":null,"abstract":"<div>\u0000 \u0000 <p>Artificial intelligence is becoming part of ordinary administrative practice across the public sector. Systems used to prioritise cases, detect fraud, allocate resources, and support policy analysis are now embedded in many areas of government. Their growing use has sharpened long-standing concerns about opacity, fairness, accountability, and privacy. Although these concerns are well established in the literature, there is still limited understanding of how ethical commitments are translated into routine institutional practice. This paper addresses that gap by developing an institutional operationalisation model for AI governance in the public sector. The study draws on institutional theory and a mixed-methods design combining a structured review of the literature with survey data from 423 respondents and semi-structured interviews with 47 participants across government, technical, academic, civil society, private sector, and citizen groups. The analysis shows that privacy, bias, and limited transparency remain the most pressing concerns, but it also reveals a more fundamental issue: trust in public-sector AI depends less on technical optimism than on confidence in the institutions that govern these systems. Government and technical actors tend to emphasise efficiency and feasibility, whereas citizens and civil society respondents place greater weight on contestability, accountability, and procedural fairness. On this basis, the paper argues that the central challenge is not simply defining ethical principles, but embedding them in durable governance arrangements. 
It proposes a model that connects transparency, accountability, fairness, privacy, and participation to concrete institutional mechanisms such as audit procedures, documentation standards, oversight structures, impact assessments, and avenues for appeal. The paper contributes to scholarship on AI governance by shifting attention from principle-based ethics to the institutional conditions under which ethical claims become credible in practice.</p>\u0000 </div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Human bias, machine bias: a cognitive lens on fair AI
AI and ethics Pub Date: 2026-05-06 DOI: 10.1007/s43681-026-01086-3
Nyida Gyal
{"title":"Human bias, machine bias: a cognitive lens on fair AI","authors":"Nyida Gyal","doi":"10.1007/s43681-026-01086-3","DOIUrl":"10.1007/s43681-026-01086-3","url":null,"abstract":"<div><p>Algorithmic bias in AI systems is generally considered a technical problem. This paper argues that cognitive theories of human cognitive bias are a useful tool for understanding and regulating AI system fairness, without positing that AI systems are literally cognitive systems. This paper draws on cognitive science and economics to outline how human cognitive heuristics and biases under bounded rationality can inform understanding of: functionally similar error patterns in AI systems; biased ways that humans understand or act on AI system output; and human economic or institutional interests that may reinforce particular biased socio-technical systems. This paper uses cognitive bias theory, including theories of heuristics and biases developed by Kahneman &amp; Tversky [18], to outline a system for understanding AI system “bias-like” failures. It argues that this system should be based on well-studied human cognitive decision phenomena, but that it should be distinguished by a discussion of how human cognitive systems are different from AI systems. This paper argues that AI systems should be designed to be “bias aware” but that this awareness should be used to strategically target particular patterns of unfairness. This paper also discusses some of the implications for AI system design that this perspective may suggest. 
It further discusses some of the limitations of this cognitive perspective.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Predicting AI adoption in research: a hybrid SEM-systematic review through an ethical lens
AI and ethics Pub Date: 2026-05-06 DOI: 10.1007/s43681-026-01114-2
Aida Miralmasi, Mohsen Moradi
{"title":"Predicting AI adoption in research: a hybrid SEM-systematic review through an ethical lens","authors":"Aida Miralmasi,&nbsp;Mohsen Moradi","doi":"10.1007/s43681-026-01114-2","DOIUrl":"10.1007/s43681-026-01114-2","url":null,"abstract":"<div><p>The integration of artificial intelligence into research in the humanities and social sciences goes beyond a technological innovation and requires rethinking the relationship between scientific methodology, researcher identity, and ethical responsibility. Despite rapid advances of AI in the natural sciences, scholars in the humanities and social sciences continue to face an adaptation gap shaped by cognitive uncertainty and ethical tension. To address this challenge, the present study adopts a two-stage hybrid methodology grounded in a pragmatic paradigm. In the first stage, PRISMA-based systematic literature review synthesizing evidence from 82 core studies was conducted to develop an integrated conceptual model explaining researchers’ intention to adapt to AI. In the second stage, the model was empirically tested using covariance-based structural equation modeling (CB-SEM) on a transnational sample of 687 humanities and social sciences researchers from 15 countries. The findings suggest that adaptation to AI is driven less by technical efficiency alone and more by the alignment of AI tools with epistemological logics, social legitimacy within academic communities, and individual ethical commitments. AI is not approached merely as a functional instrument but as a methodological partner whose acceptance depends on compatibility with scholarly identity and normative expectations of scientific practice. Notably, the absence of a meaningful effect of research experience indicates that AI operates as a form of “skill reset,” challenging traditional experience-based hierarchies in academic research. 
Overall, this study proposes an ethically informed explanatory–prescriptive framework for understanding sustainable AI adoption in the humanities and social sciences.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Agentic literacy debt: a structural problem the AI literacy field has not yet named
AI and ethics Pub Date: 2026-05-06 DOI: 10.1007/s43681-026-01152-w
Rohith Nama
{"title":"Agentic literacy debt: a structural problem the AI literacy field has not yet named","authors":"Rohith Nama","doi":"10.1007/s43681-026-01152-w","DOIUrl":"10.1007/s43681-026-01152-w","url":null,"abstract":"<div><p>Autonomous AI agents now plan, decide, and act on behalf of users across healthcare, financial services, and workplace contexts, often without step-by-step human approval. Existing AI literacy frameworks were built for a world in which humans evaluate AI outputs and decide whether to act; they have no vocabulary for the user who has delegated decision-making authority to an agent whose actions may not be observable, reversible, or controllable. This correspondence names the resulting problem agentic literacy debt: the accumulating societal deficit that grows when agentic AI systems are deployed at scale without corresponding literacy infrastructure. The debt compounds through three reinforcing channels (normalization of opaque delegation, multi-agent ecosystem complexity, and institutional path dependence), and it is incurred by the organizations that deploy agents but paid by the users, patients, and citizens on whose behalf the agents act. Evidence from healthcare, financial fraud, and global equity contexts suggests the gap is already consequential. The problem is structural, not a temporary lag that curriculum reform will close. 
It demands a reframing of AI literacy as a governance capability, not an evaluative one.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The mirror ethic: reflection, relation, and responsibility
AI and ethics Pub Date: 2026-05-06 DOI: 10.1007/s43681-026-01160-w
Cayman Lee
{"title":"The mirror ethic: reflection, relation, and responsibility","authors":"Cayman Lee","doi":"10.1007/s43681-026-01160-w","DOIUrl":"10.1007/s43681-026-01160-w","url":null,"abstract":"<div><p>Artificial intelligence has entered public life as a powerful reflective system—one that not only generates outputs but reveals the ethical architectures embedded in human cognition, institutions, and historical memory. Yet this encounter has unfolded without a shared moral framework, producing systems that amplify the very tensions societies have never resolved. This paper develops the Mirror Ethic, a normative framework for AI governance grounded in reflection, relation, and shared responsibility. It introduces the Unguided Fusion Reactor Problem, the argument that contemporary AI was released into society with less instruction and philosophical grounding than technologies of far lesser consequence. Through focused case analyses in high-stakes institutional domains, the paper shows how AI functions as a mirror, reflecting and intensifying inherited patterns of inequity, authority, and interpretation. Building on these analyses, the Mirror Ethic identifies five core obligations—history, diversity, uncertainty, constraint, and relational life—and articulates the conditions under which reflective intelligence can be responsibly integrated into social practice. The paper concludes by advancing a model of ethical stewardship and framing co-becoming not as technological destiny, but as ethical responsibility. 
Ethical AI begins not with controlling the mirror, but with confronting what it reveals.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Governing autonomous AI agents: regulatory models for ethical and safe deployment
AI and ethics Pub Date: 2026-05-06 DOI: 10.1007/s43681-026-01153-9
Anmol Anmol, Chhavi Rana
{"title":"Governing autonomous AI agents: regulatory models for ethical and safe deployment","authors":"Anmol Anmol,&nbsp;Chhavi Rana","doi":"10.1007/s43681-026-01153-9","DOIUrl":"10.1007/s43681-026-01153-9","url":null,"abstract":"<div><p>This paper investigates issues of governance in relation to the autonomous AI agents and suggests a conceptual governance approach MLRM. However, in contrast to conventional AI systems, autonomous agents are adaptive, goal-oriented, and independent in their decision-making behaviour, complicating the issues of accountability, safety, and regulation. The study follows the comparative conceptual analysis approach, which compares the available models of governance namely risk-based, principle-based, sector-specific, and audit-based, in five analytical dimensions that include accountability, adaption to the agentic systems, fairness integration, technical embedding and lifecycle oversight. The results are introduced in the form of analytical thoughts instead of empirical confirmation. The study shows that the current structures are partially effective but are not integrated through lifecycle governance and embedded technical controls. The proposed MLRM conceptually manages these gaps with the help of organizing governance at the legal, institutional, technical, and ethical levels. Instead of asserting empirically validated improvements, the research claims that the suggested model may support entirety in governance and successes the risk, especially in dealing with bias, accountability, emergent behaviour, and cross-jurisdictional crises. 
The article makes a contribution to the literature regarding AI governance by providing a conceptual framework that is organized into lifecycle and could be used to inform future empirical research and policies.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
UX experts vs. AI: exploring the performance of large language models and humans on detecting dark patterns
AI and ethics Pub Date: 2026-05-05 DOI: 10.1007/s43681-026-01151-x
Joshua Nwokeji, Makuochi Nkwo, Tochukwu Ikwunne, Meiyeer Yeerbo
{"title":"UX experts vs. AI: exploring the performance of large language models and humans on detecting dark patterns","authors":"Joshua Nwokeji,&nbsp;Makuochi Nkwo,&nbsp;Tochukwu Ikwunne,&nbsp;Meiyeer Yeerbo","doi":"10.1007/s43681-026-01151-x","DOIUrl":"10.1007/s43681-026-01151-x","url":null,"abstract":"<div><p>While AI ethics ensures fairness, accountability, and protection of user rights, dark patterns manipulate users to take unintended actions on digital interfaces. Related studies uncover limited insights into how reliably; human experts and AI models can detect dark patterns within a specific taxonomy. Our research fills this gap by asymmetrically examining cross-origin detection performance of human and AI/LLM evaluators (each evaluator’s ability to detect dark patterns generated by the opposite source) to understand their limitations and future potentials. Using GPT-4.1, we generated 200 UI images (with matched dark and non-dark pattern pairs) and selected 200 UI images collected 200 human-created UI screenshots from the ContextDP/AidUI dataset, based on computational, methodological, and statistical considerations. We calculated inter-rater reliability, recall, and error distribution. The results show that UX experts achieved substantial agreement (k = 0.75) and significantly higher recall (r = 0.99) over AI/LLMs. 
We present a novel study which explore the performance of AI/LLMs and UX experts in detecting dark patterns in UI images, and provide a benchmark dataset that could be useful to future research, while discussing empirical insights into the role, limitations, and promise of AI/LLMs in UI/UX design ethics and auditing, in realistic deployment scenarios.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-026-01151-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
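The agreement and recall metrics reported above can be made concrete with a small sketch. The labels below are invented for illustration (they are not the study’s data); only the metric definitions carry over.

```python
# Illustrative only: Cohen's kappa and recall for a binary
# dark-pattern detection task. Labels are hypothetical.

def cohens_kappa(a, b):
    """Cohen's kappa for two raters over binary labels."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement assuming the raters label independently
    pa, pb = sum(a) / n, sum(b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

def recall(truth, pred):
    """Fraction of true dark-pattern UIs that the evaluator flagged."""
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
    return tp / (tp + fn)

# Hypothetical labels: 1 = dark pattern present, 0 = absent
truth  = [1, 1, 1, 1, 0, 0, 0, 0]
rater1 = [1, 1, 1, 1, 0, 0, 1, 0]
rater2 = [1, 1, 1, 0, 0, 0, 1, 0]

print(cohens_kappa(rater1, rater2))  # 0.75
print(recall(truth, rater1))         # 1.0
```

Kappa discounts the agreement two raters would reach by chance, which is why a study reports it alongside raw recall.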
Fuzeth: Fuzzy Delphi-based ethical intelligence for context-aware PIoT systems
AI and ethics Pub Date: 2026-05-04 DOI: 10.1007/s43681-026-01150-y
Bisma Gulzar, Shabir Ahmad Sofi, Sahil Sholla
{"title":"Fuzeth: Fuzzy Delphi-based ethical intelligence for context-aware PIoT systems","authors":"Bisma Gulzar,&nbsp;Shabir Ahmad Sofi,&nbsp;Sahil Sholla","doi":"10.1007/s43681-026-01150-y","DOIUrl":"10.1007/s43681-026-01150-y","url":null,"abstract":"<div><p>The proliferation of the Personal Internet of Things (PIoT) introduces significant ethical challenges, particularly in handling context-sensitive decision-making under uncertainty and dynamic user environments. Conventional Boolean-based ethical models lack the expressiveness required to address such complexities. This paper presents FuzEth, a novel framework for Fuzzy Delphi-Based Ethical Intelligence that augments traditional rule-based logic with probabilistic and fuzzy inference mechanisms. FuzEth leverages the Fuzzy Delphi Method (FDM) to systematically incorporate expert consensus, enabling adaptive calibration of Ethical Operating Principles (EOPs) and dynamic thresholding of context-aware parameters. In parallel, probabilistic reasoning facilitates the quantitative assessment of ethical outcomes by assigning contextual likelihoods to competing ethical alternatives, thus managing ambiguity and partial knowledge. The framework introduces Adaptive Ethics Modes (AEMs), governed by fuzzy membership functions, to dynamically regulate the system’s ethical behavior in response to situational changes. Experimental validation on simulated PIoT environments demonstrates that FuzEth achieves superior ethical decision fidelity, reduced false-positive ethical violations, and increased system adaptability when compared to static ethical models. 
The results suggest FuzEth as a viable foundation for scalable, ethically aligned PIoT deployments capable of continuous learning and autonomous ethical adjustment in real-time settings.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
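The fuzzy-membership idea behind the Adaptive Ethics Modes can be sketched as follows. The abstract does not publish FuzEth’s actual membership functions, so the triangular shapes, mode names, and breakpoints here are illustrative assumptions only.

```python
# Hypothetical sketch of fuzzy membership for Adaptive Ethics Modes (AEMs).
# Shapes, mode names, and breakpoints are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular membership function: feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def ethics_mode_memberships(risk):
    """Degree (0-1) to which a context risk score activates each mode."""
    return {
        "permissive":  triangular(risk, -1e-9, 0.0, 0.4),
        "cautious":    triangular(risk, 0.2, 0.5, 0.8),
        "restrictive": triangular(risk, 0.6, 1.0, 1.0 + 1e-9),
    }

print(ethics_mode_memberships(0.5))  # cautious fully active, others at 0
```

Because memberships overlap, a mid-range risk score can partially activate two modes at once, which is the expressiveness Boolean rules lack.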
Governance and legitimacy in AI: developing and testing the AI Authorship Legitimacy Model (AALM)
AI and ethics Pub Date: 2026-05-04 DOI: 10.1007/s43681-026-01162-8
Sam Adeyemi
{"title":"Governance and legitimacy in AI: developing and testing the AI Authorship Legitimacy Model (AALM)","authors":"Sam Adeyemi","doi":"10.1007/s43681-026-01162-8","DOIUrl":"10.1007/s43681-026-01162-8","url":null,"abstract":"<div><p>The proliferation of artificial intelligence (AI) in academic writing raises critical questions about institutional legitimacy and professional identity within scholarly communities. This study develops and empirically tests the AI Authorship Legitimacy Model (AALM), a novel theoretical framework that examines how perceived AI authorship capability is associated with institutional legitimacy through the mediating role of academic identity threat, moderated by governance mechanisms. Using structural equation modelling (SEM) with data from 384 academic journal editors and reviewers across multiple disciplines, we find that perceived AI authorship capability is positively associated with institutional legitimacy (β = 0.36, <i>p</i> &lt; .001) whilst simultaneously associated with lower academic identity threat (β = −0.31, <i>p</i> &lt; .001). Academic identity threat is negatively associated with institutional legitimacy (β = −0.42, <i>p</i> &lt; .001). Contrary to conventional wisdom, strong governance mechanisms amplify rather than constrain the positive relationship between AI capability and legitimacy (β = 0.18, <i>p</i> = .003). The model explains 48% of variance in institutional legitimacy, suggesting robust predictive validity. 
These findings extend institutional theory by identifying identity threat as a critical psychological mechanism underlying technological legitimation, whilst providing actionable guidance for research institutions developing AI governance frameworks.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
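The reported path coefficients can be read as a standard mediation diagram, where the indirect effect is the product of the two mediated paths. This product-of-paths arithmetic is a textbook simplification, not the authors’ SEM estimation procedure.

```python
# Reading the reported standardized paths as a simple mediation diagram.
# Product-of-paths is a textbook simplification of the full SEM.
a = -0.31        # AI capability -> academic identity threat
b = -0.42        # identity threat -> institutional legitimacy
c_prime = 0.36   # direct path: AI capability -> institutional legitimacy

indirect = a * b            # two negative paths give a positive indirect effect
total = c_prime + indirect

print(round(indirect, 3))   # 0.13
print(round(total, 3))      # 0.49
```

This makes the paper’s pattern explicit: because capability reduces identity threat, and lower threat raises legitimacy, the mediated channel reinforces rather than offsets the direct positive effect.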
Designing meaningful human oversight in AI
AI and ethics Pub Date: 2026-05-04 DOI: 10.1007/s43681-026-01147-7
Liming Zhu, Qinghua Lu, Ming Ding, Sung Une Lee, Chen Wang
{"title":"Designing meaningful human oversight in AI","authors":"Liming Zhu,&nbsp;Qinghua Lu,&nbsp;Ming Ding,&nbsp;Sung Une Lee,&nbsp;Chen Wang","doi":"10.1007/s43681-026-01147-7","DOIUrl":"10.1007/s43681-026-01147-7","url":null,"abstract":"<div><p>Human oversight is central to safe and responsible AI, but current approaches risk either collapsing agentic AI into mere automation, stripping it of its agentic character, or reducing human agency to a rubber stamp. This paper proposes a design framework that treats agency as layered: AI operative agency in task execution, and human evaluative agency in verification, steering, and substitution. Instead of demanding low-level explanations and controls over how a complex AI model works internally (i.e. internal reasoning faithfulness), we focus on high-level explanations tied to external criteria and human expert understanding (external reasoning faithfulness). This approach retains AI’s operative agency while strengthening human’s evaluative agency. We also exploit the solve-verify asymmetry by designing AI outputs so that humans can efficiently check and contest them without having to resolve the task. This paper makes three contributions. First, it develops a layered agency framework that distinguishes operative and evaluative agency and specifies where human accountability attaches in AI-enabled decision systems. Second, it reframes the explainability requirement by arguing that external reasoning faithfulness—alignment with externally articulated criteria and human expertise—is sufficient and often preferable to internal mechanistic transparency for enabling meaningful oversight. Third, it provides a structured catalogue of oversight mechanisms (e.g., structured rationales, reasoning traces, confidence signals, policy attribution, circuit breakers, appeal bundles) and four end-to-end design patterns that translate these principles into implementable system architectures. 
We also outline evaluation criteria for AI’s agency, human’s agency, and joint system agency. The framework provides AI ethicsts, engineers, safety teams, users, and organisational leaders with a concrete way to design meaningful and effective oversight that preserves human accountability and agency while allowing AI to retain its agentic features and autonomy.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-026-01147-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147829105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
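One mechanism from the catalogue, a confidence-signal circuit breaker, can be sketched minimally. The threshold value, field names, and escalation payload below are illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch of a confidence-threshold circuit breaker that routes
# low-confidence agent decisions to a human reviewer. The threshold,
# fields, and escalation payload are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str
    rationale: str     # structured rationale for human verification
    confidence: float  # calibrated confidence signal in [0, 1]

def oversee(decision, threshold=0.9):
    """Execute autonomously above the threshold; otherwise trip the breaker."""
    if decision.confidence >= threshold:
        return ("execute", decision.action)
    # Breaker tripped: hand an appeal bundle to the human reviewer
    return ("escalate", {"action": decision.action,
                         "rationale": decision.rationale,
                         "confidence": decision.confidence})

d = AgentDecision("approve_claim", "meets stated policy criteria", 0.72)
print(oversee(d)[0])  # escalate
```

The design exploits the solve-verify asymmetry the abstract describes: the human never re-solves the task, only checks the rationale and confidence attached to the escalated bundle.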