AI and ethics | Pub Date: 2025-05-20 | DOI: 10.1007/s43681-025-00733-5
Ninell Oldenburg, Anders Søgaard
{"title":"Navigating the informativeness-compression trade-off in XAI","authors":"Ninell Oldenburg, Anders Søgaard","doi":"10.1007/s43681-025-00733-5","DOIUrl":"10.1007/s43681-025-00733-5","url":null,"abstract":"<div><p>Every explanation faces a trade-off between informativeness and compression (Kinney and Lombrozo, 2022). On the one hand, we want to aim for as much detailed and correct information as possible, informativeness, on the other hand, we want to ensure that a human can process and comprehend the explanation, compression. Current methods in eXplainable AI (XAI) try to satisfy this trade-off <i>statically</i>, outputting <i>one</i> fixed, non-adjustable explanation that sits somewhere on the spectrum between informativeness and compression. However, some current XAI methods fail to meet the expectations of users and developers such that several failures have been reported in the literature which often come with user-specific knowledge gaps and good-enough understanding. In this work, we propose <i>Dynamic XAI</i> to navigate the trade-off interactively. We argue how this simple idea can help overcome the trade-off by eliminating gaps in user-specific understanding and preventing misunderstandings. We conclude by situating our approach within the broader ethical considerations around XAI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4925 - 4942"},"PeriodicalIF":0.0,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00733-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-05-19 | DOI: 10.1007/s43681-025-00738-0
Kshemaahna Nagi
{"title":"A standard reporting system for the environmental impact of machine learning","authors":"Kshemaahna Nagi","doi":"10.1007/s43681-025-00738-0","DOIUrl":"10.1007/s43681-025-00738-0","url":null,"abstract":"<div><p>The growing demand of compute and resources required for developing machine learning models has led to an increased adverse impact on the environment. However, there is a lack of data concerning the environmental footprint of machine learning models available in the public domain. Even when data is available, important parameters such as water consumption are ignored. This paper aims to provide a standardized benchmark to report the environmental impact of individual machine learning models in terms of energy use, water consumption and carbon footprint. The proposed documentation system, referred to as the EnvCard, is intended to be an analogue to the model card for model reporting, helping stakeholders make more resource aware decisions. EnvCards are intended to be a stepping stone towards increasing transparency about the unintended consequences of the accelerated development of Artificial Intelligence technologies.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4915 - 4924"},"PeriodicalIF":0.0,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-05-14 | DOI: 10.1007/s43681-025-00743-3
Daniel W. Tigard
{"title":"On bullshit, large language models, and the need to curb your enthusiasm","authors":"Daniel W. Tigard","doi":"10.1007/s43681-025-00743-3","DOIUrl":"10.1007/s43681-025-00743-3","url":null,"abstract":"<div><p>Amidst all the hype around artificial intelligence (AI), particularly regarding large language models (LLMs), generative AI and chatbots like ChatGPT, a surge of headlines is instilling caution and even explicitly calling “bullshit” on such technologies. Should we follow suit? What exactly does it mean to call bullshit on an AI program? When is doing so a good idea, and when might it not be? With this paper, I aim to provide a brief guide on how to call bullshit on ChatGPT and related systems. In short, one must understand the basic nature of LLMs, how they function and what they produce, and one must recognize bullshit. I appeal to the prominent work of the late Harry Frankfurt and suggest that recent accounts jump too quickly to the conclusion that LLMs are bullshitting. In doing so, I offer a more level-headed approach to calling bullshit, and accordingly, a way of navigating some of the recent critiques of generative AI systems.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4863 - 4873"},"PeriodicalIF":0.0,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00743-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-05-14 | DOI: 10.1007/s43681-025-00698-5
Zoé Roy-Stang, Jim Davies
{"title":"Human biases and remedies in AI safety and alignment contexts","authors":"Zoé Roy-Stang, Jim Davies","doi":"10.1007/s43681-025-00698-5","DOIUrl":"10.1007/s43681-025-00698-5","url":null,"abstract":"<div><p>Errors in judgment can undermine artificial intelligence (AI) safety and alignment efforts, leading to potentially catastrophic consequences. Attitudes towards AI range from total support to total opposition, and there is little agreement on how to approach the issues. We discuss how relevant cognitive biases could affect the general public’s perception of AI developments and risks associated with advanced AI. We focus on how biases could affect decision-making in key contexts of AI development, safety, and governance. We review remedies that could reduce or eliminate these biases to improve resource allocation, prioritization, and planning. We conclude with a summary list of ‘information consumer remedies’ which can be applied at the individual level and ‘information system remedies’ which can be incorporated into decision-making structures, including decision support systems, to improve the quality of decision-making. We also provide suggestions for future research on biases and remedies that could contribute to mitigating global catastrophic risks in the context of emerging, high-risk, high-reward technologies.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4891 - 4913"},"PeriodicalIF":0.0,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-05-14 | DOI: 10.1007/s43681-025-00751-3
Jana Mišić, Rinie van Est, Linda Kool
{"title":"Good governance of public sector AI: a combined value framework for good order and a good society","authors":"Jana Mišić, Rinie van Est, Linda Kool","doi":"10.1007/s43681-025-00751-3","DOIUrl":"10.1007/s43681-025-00751-3","url":null,"abstract":"<div><p>Good governance of AI-supported public services means that they should function in a democratic and rule-of-law manner (“good order”) and consider the just treatment and wellbeing of citizens (“good society”). To gain insight into relevant “good order” and “good society” values, this study uses AI ethics and public administration literature to develop a comprehensive value framework for the good governance of public sector AI. We identify values pivotal to the AI-public sector nexus through a dual-phase analysis. First, we identify seven core values: five “good order” core values (responsiveness, effectiveness, procedural justice, resilience, and counterbalance) and two “good society” core values (wellbeing, social justice). Subsequently, delving into 33 studies spanning AI ethics and public administration, we identify operational values related to the core values. The operational values provide further interpretation of the core values and operationalize them. This second round in our research also shows that the seven core values found during the first round indeed account for value considerations encountered by scholars so far. In this way, we arrive at a robust value framework for the good governance of AI use in the public sector. The framework is not a one-size-fits-all recipe for public sector AI but a guide for policymakers to consider both democratic and ethical values. It can address gaps in both research fields, analyze moral dilemmas in AI policy tools like public-private partnerships, and aid policymakers in blending abstract values with contextual decision-making.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4875 - 4889"},"PeriodicalIF":0.0,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00751-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-05-13 | DOI: 10.1007/s43681-025-00739-z
Wael Badawy
{"title":"Algorithmic sovereignty and democratic resilience: rethinking AI governance in the age of generative AI","authors":"Wael Badawy","doi":"10.1007/s43681-025-00739-z","DOIUrl":"10.1007/s43681-025-00739-z","url":null,"abstract":"<div><p>\u0000 The rise of generative artificial intelligence (AI) is challenging governance paradigms, raising concerns about public trust, disinformation, and democratic resilience. While these technologies offer unprecedented efficiency and innovation, they also risk amplifying bias, eroding transparency, and centralizing power within proprietary platforms. This paper reframes algorithmic sovereignty as the democratic capacity to regulate and audit AI systems, ensuring they align with ethical, civic, and institutional norms. Using a mixed-methods approach—content analysis, expert interviews, and comparative policy review—we explore how regulatory frameworks in the EU, China, the U.S., and other regions address these challenges. By clarifying the scope of algorithmic governance and integrating counterarguments around disinformation and AI misuse, we develop a multilayered framework for human-centered AI oversight. We also examine geopolitical tensions shaping global digital sovereignty and propose actionable strategies to strengthen trust and civic participation. Figures highlight regional governance effectiveness, trust dynamics, and regulatory orientations. We conclude that algorithmic sovereignty must evolve as an interdisciplinary and participatory governance goal that reinforces democracy rather than undermining it.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4855 - 4862"},"PeriodicalIF":0.0,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-05-09 | DOI: 10.1007/s43681-025-00729-1
Wael Badawy
{"title":"The ethical use and development of artificial intelligence (AI) strategy in Egypt: identifying gaps and recommendations","authors":"Wael Badawy","doi":"10.1007/s43681-025-00729-1","DOIUrl":"10.1007/s43681-025-00729-1","url":null,"abstract":"<div><p>Egypt’s 2024 national strategy on the ethical use and development of artificial intelligence (AI) represents a significant milestone in aligning technological advancement with international human rights and responsible innovation principles. While the strategy articulates commendable ethical pillars—such as transparency, privacy, and accountability—it remains largely aspirational in nature, with limited provisions for enforcement, sector-specific guidance, or public engagement. This paper presents a structured qualitative policy analysis of the strategy, identifying twelve key governance gaps through a comparative review method grounded in international frameworks (e.g., UNESCO, EU AI Act, Canada’s Directive on Automated Decision-Making) and regional strategies from the UAE, Saudi Arabia, and South Africa. Additionally, an embedded case study from Egypt’s healthcare sector highlights the practical implications of these policy shortcomings. Key gaps include the absence of legal mandates, exclusion of informal sector needs, limited data sovereignty, and a lack of gender-responsive AI auditing. To address these challenges, the study proposes actionable reforms including the establishment of an independent AI ethics regulator, the adoption of sector-specific ethical toolkits, and the integration of AI ethics into Egypt’s Vision 2030 and national education system. This research offers one of the first academic assessments of Egypt’s AI ethics framework and contributes to a growing body of literature on responsible AI governance in the Global South.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3579 - 3591"},"PeriodicalIF":0.0,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145163367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-04-26 | DOI: 10.1007/s43681-025-00737-1
Antonio Araújo
{"title":"ChatGPT-based posthumous memories of a terminally ill patient: metabioethics and generative AI at the service of palliative medicine","authors":"Antonio Araújo","doi":"10.1007/s43681-025-00737-1","DOIUrl":"10.1007/s43681-025-00737-1","url":null,"abstract":"<div><p>Any approach than seeks a bioethical justification for the employment of ChatGPT technology in <i>post mortem</i> interactions has a potentially significant impact on palliative medicine. Indeed, it can lead terminally ill patients to improve their own autonomy by encouraging them to customise, train or personalise ChatGPT in order for such a tool enables some <i>post mortem</i> interlocution of their “memories” (entropic audiovisual records on key issues) with friends and relatives. Conjecture that may trigger positive effects regarding the end-of-life perception/consciousness as a relevant existential value.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3455 - 3459"},"PeriodicalIF":0.0,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145145139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-04-23 | DOI: 10.1007/s43681-025-00726-4
Rinu Ann Sebastian, Kris Ehinger, Tim Miller
{"title":"Do we need watchful eyes on our workers? Ethics of using computer vision for workplace surveillance","authors":"Rinu Ann Sebastian, Kris Ehinger, Tim Miller","doi":"10.1007/s43681-025-00726-4","DOIUrl":"10.1007/s43681-025-00726-4","url":null,"abstract":"<div><p>In this paper, we critically examine the relevant ethical concerns of using computer vision-based surveillance in workplaces and propose an intent—and priority-based ethical framework for such systems. With the growing capabilities of computer vision technologies, its application in monitoring workplaces brings forth significant concerns. Organisations increasingly leverage computer vision for workplace surveillance to improve productivity, safety, and security. Unlike electronic surveillance techniques that monitor workers’ wire or electronic communication, computer vision-based workplace surveillance (CVWS) captures highly detailed visual and personal information about workers, including body language, emotional state, and actions. This makes CVWS potentially more intrusive than traditional electronic surveillance, raising a more comprehensive range of ethical considerations. However, this topic has received minimal attention in the current literature. Our proposed framework combines the intention for deploying surveillance with the moral notions of privacy, data security, fairness, transparency, explainability, autonomy, beneficence, nonmaleficence, dignity, and reliability to morally scrutinise CVWS systems. The paper proposes a second framework that aims to establish accountability among key stakeholders of the CVWS system. Further, we discuss two critical questions to consider when evaluating the necessity for CVWS systems in a work environment. In practice, this work will serve as a groundwork for stakeholders such as technical developers, employers, and regulatory and advocacy teams to make ethical design decisions during the developmental, operational, and maintenance stages of CVWS systems and devise proactive strategies to minimise potential harm.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3557 - 3577"},"PeriodicalIF":0.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00726-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145167934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-04-22 | DOI: 10.1007/s43681-025-00727-3
Sylvia Martin, Aneta Piperkova, Jana Zschüntzsch, Edith Gross Sky, Joern Schenk, Daniel Theisen, Gergana Kyosovska-Peshtenska, Mats Hansson
{"title":"“Code of ethical practice” for sharing and access to personal data for AI-/ ML-based technologies in rare diseases genetic NBS research project: a collaborative construction in a European IMI project","authors":"Sylvia Martin, Aneta Piperkova, Jana Zschüntzsch, Edith Gross Sky, Joern Schenk, Daniel Theisen, Gergana Kyosovska-Peshtenska, Mats Hansson","doi":"10.1007/s43681-025-00727-3","DOIUrl":"10.1007/s43681-025-00727-3","url":null,"abstract":"<div><p>The early diagnosis of rare diseases (RDs) is crucial for timely intervention and effective management. The Screen4Care project seeks to accelerate this process by combining newborn screening with artificial intelligence (AI) and machine-learning (ML) tools. The Screen4Care interdisciplinary approach aims to reduce the lengthy diagnostic journey for individuals with RDs and improve their quality of life. A Code of Ethical Practice (CoEP) was developed to ensure the ethical handling of personal data in AI/ML-based screening. This CoEP outlines standards for how Screen4Care partner organizations can share and access patient data while minimizing the risk of misuse. Developed through the combined efforts of expert groups, European teams, advisory bodies, and patient organizations, the CoEP ensures a secure framework for data handling. This establishes a robust set of ethical principles, ensuring that data collection and sharing are conducted in a safe and responsible manner. This framework supports innovative AI/ML solutions, optimizing the diagnosis, treatment, and management of RDs, while safeguarding the interests of individuals and their families.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4843 - 4853"},"PeriodicalIF":0.0,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00727-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}