Aligning with ideal values: a proposal for anchoring AI in moral expertise
Erich Riesen, Mark Boespflug
AI and ethics 5(4): 3727–3741. Published 2025-02-17. DOI: 10.1007/s43681-025-00664-1

Abstract: Autonomous AI agents are increasingly required to operate in contexts where human welfare is at stake, raising the imperative for them to act in ways that are morally optimal—or at least morally permissible. The value alignment research program seeks to create "beneficial AI" by aligning AI behavior with human values (Russell, Human Compatible: Artificial Intelligence and the Problem of Control, Penguin, London, 2019). In this article, we propose a method for specifying permissible outcomes for AI agents that targets ideal values via moral expertise as embodied in the collective judgments of philosophical ethicists. We defend the notion that ethicists are moral experts against several objections found in the recent literature and argue that their aggregated judgments offer the epistemically best available proxy for moral truth. We recommend a systematic study of ethicists' judgments—using tools from social psychology and social choice theory—to guide AI agents' behavior in morally complex situations.

Partnering with AI to derive and embed principles for ethically guided AI behavior
Michael Anderson
AI and ethics 5(3): 1893–1910. Published 2025-02-17. DOI: 10.1007/s43681-025-00656-1

Abstract: As artificial intelligence (AI) systems, particularly large language models (LLMs), become increasingly embedded in sensitive and impactful domains, ethical failures threaten public trust and the broader acceptance of these technologies. Current approaches to AI ethics rely on reactive measures—such as keyword filters, disclaimers, and content moderation—that address immediate concerns but fail to provide the depth and flexibility required for principled decision-making. This paper introduces AI-aided reflective equilibrium (AIRE), a novel framework for embedding ethical reasoning into AI systems. Building on the philosophical tradition of deriving principles from specific cases, AIRE leverages the capabilities of AI to dynamically generate and analyze such cases and to abstract and refine ethical principles from them. Through illustrative scenarios, including a self-driving car dilemma and a vulnerable individual interacting with an AI, we demonstrate how AIRE navigates complex ethical decisions by prioritizing principles like minimizing harm and protecting the vulnerable. We address critiques of scalability, complexity, and the question of "whose ethics," highlighting AIRE's potential to democratize ethical reasoning while maintaining rigor and transparency. Beyond its technical contributions, this paper underscores the transformative potential of AI as a collaborative partner in ethical deliberation, paving the way for trustworthy, principled systems that can adapt to diverse real-world challenges.

Immersive environment and neurogaming: a look at the impacts on humans and fundamental rights in the new frontiers arising from technology
Gabrielle Bezerra Sales Sarlet, Viviane Ceolin Dallasta Del Grossi
AI and ethics 5(4): 3777–3789. Published 2025-02-17. DOI: 10.1007/s43681-025-00669-w

Abstract: This bibliographical study employs an exploratory and documentary methodology and the hypothetical-deductive method to analyze how neurogames can inform an agenda for the defense of human and fundamental rights. It highlights the appropriate protection of neuro-rights and examines the concept of neurocognitive integrity, arguing that regulatory and legislative frameworks are needed in harmony with the Brazilian constitution and the main international human rights instruments, in alignment with civil society and human rights bodies, so that fair, safe, robust, and reliable AI modules can be developed and applied.

Prudential reasons for designing entitled chatbots: How robot "rights" can improve human well-being
Guido Löhr, Matthew Dennis
AI and ethics 5(4): 3791–3802. Published 2025-02-17. DOI: 10.1007/s43681-025-00676-x
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12208972/pdf/

Abstract: Can robots or chatbots be moral patients? The question of robot rights is often linked to moral reasons like precautionary principles or the ability to suffer. We argue that we have prudential reasons for building robots that can at least hold us accountable (criticize us, etc.) and that we have prudential reasons to build robots that can demand that we treat them with respect. This proposal aims to add nuance to the robot rights debate by answering a key question: Why should we want to build robots that could have rights in the first place? We argue that some degree of accountability in our social relationships contributes to our well-being and flourishing. The normativity ascribed to robots will increase their social and non-social functionalities, from action coordination to more meaningful relationships. Having a robot that has a certain "standing" to hold us accountable can improve our epistemic standing and satisfy our desire for recognition.
{"title":"Who is an AI Ethicist? An empirical study of expertise, skills, and profiles to build a competency framework","authors":"Mariangela Zoe Cocchiaro, Jessica Morley, Claudio Novelli, Enrico Panai, Alessio Tartaro, Luciano Floridi","doi":"10.1007/s43681-024-00643-y","DOIUrl":"10.1007/s43681-024-00643-y","url":null,"abstract":"<div><p>Over the last decade the figure of the AI Ethicist has seen significant growth in the ICT market. However, only a few studies have taken an interest in this professional profile, and they have yet to provide a normative discussion of its expertise and skills. The goal of this article is to initiate such discussion. We argue that AI Ethicists should be experts and use a heuristic to identify them. Then, we focus on their specific kind of moral expertise, drawing on a parallel with the expertise of Ethics Consultants in clinical settings and on the bioethics literature on the topic. Finally, we highlight the differences between Health Care Ethics Consultants and AI Ethicists and derive the expertise and skills of the latter from the roles that AI Ethicists should have in an organisation.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3713 - 3725"},"PeriodicalIF":0.0,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145164174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Design for operator contestability: control over autonomous systems by introducing defeaters
Herman Veluwenkamp, Stefan Buijsman
AI and ethics 5(4): 3699–3711. Published 2025-02-07. DOI: 10.1007/s43681-025-00657-0
Open access: https://link.springer.com/content/pdf/10.1007/s43681-025-00657-0.pdf

Abstract: This paper introduces the concept of Operator Contestability in AI systems: the principle that those overseeing AI systems (operators) must have the necessary control to be accountable for the decisions made by these algorithms. We argue that designers have a duty to ensure operator contestability. We demonstrate how this duty can be fulfilled by applying the 'Design for Defeaters' framework, which provides strategies to embed tools within AI systems that enable operators to challenge decisions. Defeaters are designed to contest either the justification for the AI's data inputs (undercutting defeaters) or the validity of the conclusions drawn from that data (rebutting defeaters). To illustrate the necessity and application of this framework, we examine case studies such as AI-driven recruitment processes, where operators need tools and authority to uncover and address potential biases, and autonomous driving systems, where real-time decision-making is crucial. The paper argues that operator contestability requires ensuring that operators have (1) epistemic access to the relevant normative reasons and (2) the authority and cognitive capacity to act on these defeaters. By addressing these challenges, the paper emphasizes the importance of designing AI systems in a way that enables operators to effectively contest AI decisions, thereby ensuring that the appropriate individuals can take responsibility for the outcomes of human-AI interactions.

Ethical challenges and opportunities in ChatGPT integration for education: insights from emerging economy
Romny Ly, Bora Ly
AI and ethics 5(4): 3681–3698. Published 2025-02-02. DOI: 10.1007/s43681-025-00667-y

Abstract: This study delves into the ethical implications of implementing ChatGPT in Cambodian educational settings, chosen for their unique pedagogical challenges in integrating AI tools. The research explores the use of ChatGPT as a supplementary educational tool to create study materials, facilitate discussions, and provide student feedback. Data from 297 students and teachers in various Cambodian educational institutions were collected through structured questionnaires and analyzed using partial least squares structural equation modeling (PLS-SEM). The study systematically investigates the influence of data privacy concerns, perceived bias, fairness, and teacher-student dynamics on the perceived ethicality and subsequent adoption of ChatGPT. The results show that concerns about data privacy and perceived bias significantly and negatively impact ethical perceptions. However, fairness acts as a mediating factor that mitigates these adverse effects: for instance, when AI tools provide equitable support, concerns about bias tend to diminish, thereby improving ethical perception. Furthermore, the reduction in face-to-face interactions, including personalized guidance, spontaneous discussions, and non-verbal cues, negatively affects the perceived ethicality of AI tools by undermining trust and reducing meaningful human connections. These insights provide practical recommendations for educational institutions to ensure responsible and equitable integration of AI technologies, ultimately supporting an ethically sound and effective learning environment.

The need for ethical guidelines in mathematical research in the time of generative AI
Markus Pantsar
AI and ethics 5(4): 3657–3668. Published 2025-02-02. DOI: 10.1007/s43681-025-00660-5
Open access: https://link.springer.com/content/pdf/10.1007/s43681-025-00660-5.pdf

Abstract: Generative artificial intelligence (AI) applications based on large language models have not enjoyed much success in symbolic processing and reasoning tasks, thus making them of little use in mathematical research. However, recently DeepMind's AlphaProof and AlphaGeometry 2 applications have been reported to perform well in mathematical problem solving. These applications are hybrid systems combining large language models with rule-based systems, an approach sometimes called neuro-symbolic AI. In this paper, I present a scenario in which such systems are used in research mathematics, more precisely in theorem proving. In the most extreme case, such a system could be an autonomous automated theorem prover (AATP), with the potential of proving new humanly interesting theorems and even presenting them in research papers. The use of such AI applications would be transformative to mathematical practice and demand clear ethical guidelines. In addition to that scenario, I identify other, less radical, uses of generative AI in mathematical research. I analyse how guidelines set for ethical AI use in scientific research can be applied in the case of mathematics, arguing that while there are many similarities, there is also a need for mathematics-specific guidelines.

Can artificial intelligence embody moral values?
Torben Swoboda, Lode Lauwaert
AI and ethics 5(4): 3669–3680. Published 2025-02-02. DOI: 10.1007/s43681-025-00662-3

Abstract: The neutrality thesis holds that technology cannot be laden with values—it is inherently value-neutral. This long-standing view has faced critiques, but much of the argumentation against neutrality has focused on traditional, non-smart technologies like bridges and razors. In contrast, artificial intelligence (AI) is a smart technology increasingly used in high-stakes domains like healthcare, finance, and policing, where its decisions can cause moral harm. In this paper, we argue that AI, particularly artificial agents that autonomously make decisions to pursue their goals, challenges the neutrality thesis. Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty, and avoiding harm. We provide a conceptual framework discussing the neutrality thesis, values, and AI. Moreover, we examine two approaches to designing computational models of morality, artificial conscience and ethical prompting, and present empirical evidence from text-based game environments that artificial agents with such models exhibit more ethical behavior compared to agents without these models. The findings support that AI can embody moral values, which contradicts the claim that all technologies are necessarily value-neutral.
{"title":"Standards, frameworks, and legislation for artificial intelligence (AI) transparency","authors":"Brady Lund, Zeynep Orhan, Nishith Reddy Mannuru, Ravi Varma Kumar Bevara, Brett Porter, Meka Kasi Vinaih, Padmapadanand Bhaskara","doi":"10.1007/s43681-025-00661-4","DOIUrl":"10.1007/s43681-025-00661-4","url":null,"abstract":"<div><p>The global landscape of transparency standards, frameworks, and legislation for artificial intelligence (AI) shows an increasing focus on building trust, accountability, and ethical deployment. This paper presents comparative analysis of key frameworks for AI transparency, such as the IEEE P7001 standard and the CLeAR Documentation Framework, highlighting how regions like the United States, European Union, China, and Japan are addressing the need for transparent and trustworthy AI systems. Common themes across these standards include the need for tiered transparency levels based on system risk and impact, continuous documentation updates throughout the development and revision processes, and the production of explanations tailored to various stakeholder groups. Several key challenges arise in the development of AI transparency standards, frameworks, and legislation, including balancing transparency with privacy, ensuring intellectual property rights, and addressing security concerns. Promoting adaptable, sector-specific transparency regulatory structures is critical in the development of frameworks flexible enough to keep pace with AI’s rapid technological advancement. These insights contribute to a growing body of literature on how best to develop transparency regulatory structures that not only build trust in AI but also support innovation across industries.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3639 - 3655"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145170857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}