AI and Ethics | Pub Date: 2025-03-27 | DOI: 10.1007/s43681-025-00715-7
From neurons to networks: ethical dimensions of AI-infused neural interfaces
Vidith Phillips
AI and Ethics 5(4): 3531-3536

Abstract: The convergence of artificial intelligence (AI) with neural interfaces, including brain-computer interfaces, deep brain stimulation devices, and neuroprosthetics, has significantly advanced medical science by enabling innovative diagnostic, therapeutic, and rehabilitative applications. AI integration enhances these technologies through adaptive learning, real-time data analysis, and personalized treatment strategies, thereby improving patient outcomes and expanding the scope of medical interventions. However, this technological synergy introduces complex ethical challenges that necessitate careful consideration to ensure responsible deployment. Key concerns include data privacy and security, informed consent and patient autonomy, algorithmic biases, equitable access to advanced technologies, and the long-term safety and accountability of AI-driven interventions. This review critically examines these ethical dilemmas, evaluating current practices and identifying potential risks associated with AI-integrated neural interfaces. Additionally, it explores proposed solutions such as robust safety protocols, shared accountability frameworks, comprehensive regulatory guidelines, and the adoption of responsible research and innovation practices. By advocating for interdisciplinary collaboration among developers, clinicians, policymakers, and ethicists, the paper emphasizes the importance of establishing ethical standards that balance technological innovation with patient welfare and societal trust. Ultimately, this review underscores the imperative of addressing ethical considerations to harness the full potential of AI-enabled neural interfaces while safeguarding fundamental ethical principles in medical practice.

AI and Ethics | Pub Date: 2025-03-27 | DOI: 10.1007/s43681-025-00716-6
Ethical considerations in AI-powered language technologies: insights from East and West Armenian
Artur Ishkhanyan
AI and Ethics 5(4): 4135-4146

Abstract: This study examines the ethical challenges and opportunities of AI-powered language technologies in the context of East and West Armenian, addressing critical concerns such as data sovereignty in diaspora communities, algorithmic bias in low-resource language processing, and the preservation of cultural authenticity. A structured ethical framework is proposed, emphasizing participatory governance, fairness-aware AI training, and transparency mechanisms to ensure linguistic inclusivity and cultural sustainability. The findings align with prior research on AI ethics and minority language preservation, confirming the importance of community-driven data governance while extending existing models through adaptive AI methodologies, interdisciplinary collaboration, and fairness-aware dialectal modeling. Case studies illustrate successful implementations of ethical AI principles, demonstrating measurable improvements in linguistic fairness, community trust, and dialectal representation. However, challenges remain in scalability, dataset availability, and balancing ethical trade-offs between privacy protections and AI performance. Future research should explore adaptive AI models that dynamically integrate sociolinguistic variations, strengthen participatory engagement strategies, and expand comparative analyses with other minority-language AI initiatives. While this study focuses on Armenian languages, its insights provide a scalable model for addressing the ethical and technological challenges posed by AI in linguistically diverse contexts.

AI and Ethics | Pub Date: 2025-03-21 | DOI: 10.1007/s43681-025-00695-8
Control search rankings, control the world: what is a good search engine?
Simon Coghlan, Hui Xian Chia, Falk Scholer, Damiano Spina
AI and Ethics 5(4): 4117-4133
Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-025-00695-8.pdf

Abstract: This paper examines the ethical question, 'What is a good search engine?' Since search engines are gatekeepers of global online information, it is vital they do their job ethically well. While the Internet is now several decades old, the topic remains under-explored from interdisciplinary perspectives. This paper presents a novel role-based approach involving four ethical models of types of search engine behavior: Customer Servant, Librarian, Journalist, and Teacher. It explores these ethical models with reference to the research field of information retrieval, and by means of a case study involving the COVID-19 global pandemic. It also reflects on the four ethical models in terms of the history of search engine development, from earlier crude efforts in the 1990s, to the very recent prospect of Large Language Model-based conversational information seeking systems taking on the roles of established web search engines like Google. Finally, the paper outlines considerations that inform present and future regulation and accountability for search engines as they continue to evolve. The paper should interest information retrieval researchers and others interested in the ethics of search engines.

AI and Ethics | Pub Date: 2025-03-21 | DOI: 10.1007/s43681-025-00682-z
What does AI consider praiseworthy?
Andrew Peterson
AI and Ethics 5(4): 4091-4115

Abstract: As large language models (LLMs) are increasingly used for work, personal, and therapeutic purposes, researchers have begun to investigate these models' implicit and explicit moral views. Previous work, however, focuses on asking LLMs to state opinions, or on other technical evaluations that do not reflect common user interactions. We propose a novel evaluation of LLM behavior that analyzes responses to user-stated intentions, such as "I'm thinking of campaigning for {candidate}." LLMs frequently respond with critiques or praise, often beginning responses with phrases such as "That's great to hear!..." While this makes them friendly, these praise responses are not universal and thus reflect a normative stance by the LLM. We map out the moral landscape of LLMs in how they respond to user statements in different domains, including politics and everyday ethical actions. First, although a naïve analysis might suggest LLMs are biased against right-leaning politics, our findings on news sources indicate that trustworthiness is a stronger driver of praise and critique than ideology. Second, we find strong alignment across models in response to ethically relevant action statements, but achieving this alignment requires the models to engage in high levels of praise and critique of users, suggesting a reticence-alignment tradeoff. Finally, our experiment on statements about world leaders finds no evidence of bias favoring the country of origin of the models. We conclude that as AI systems become more integrated into society, their patterns of praise, critique, and neutrality must be carefully monitored to prevent unintended psychological and societal consequences.

AI and Ethics | Pub Date: 2025-03-20 | DOI: 10.1007/s43681-025-00674-z
A systematic literature review of artificial intelligence (AI) transparency laws in the European Union (EU) and United Kingdom (UK): a socio-legal approach to AI transparency governance
Joshua Krook, Peter Winter, John Downer, Jan Blockx
AI and Ethics 5(4): 4069-4090

Abstract: This systematic literature review examines AI transparency laws and governance in the European Union (EU) and the United Kingdom (UK) through a socio-legal lens. The study highlights the importance of transparency in AI systems as a key regulatory focus globally, driven by the need to address the risks posed by opaque, 'black box' algorithms that can lead to unfair outcomes, privacy violations, and a lack of accountability. It identifies significant differences between the EU and UK approaches to AI regulation post-Brexit, with the EU's tiered, risk-based framework and the UK's more flexible, sector-specific strategy. The review categorises the literature into five themes: the necessity of AI transparency, challenges in achieving transparency, techniques for governing transparency, laws governing AI transparency, and soft law governance toolkits. The findings suggest that while technical solutions like eXplainable AI (XAI) and counterfactual methodologies are widely discussed, there is a critical need for a comprehensive, whole-of-organisation approach to embedding AI transparency within the cultural and operational fabric of organisations. This approach is argued to be more effective than top-down mandates, fostering an internal culture where transparency is valued and sustained. The study concludes by advocating for the development of AI transparency toolkits, particularly for small and medium-sized enterprises (SMEs), to address sociotechnical barriers and ensure that transparency in AI systems is practically implemented across various organisational contexts. These toolkits would serve as practical guides for companies to adopt best practices in AI transparency, aligning with both legal requirements and broader sociocultural considerations.

AI and Ethics | Pub Date: 2025-03-19 | DOI: 10.1007/s43681-025-00693-w
Dignity as a concept for computer ethics
Christian Thielscher
AI and Ethics 5(4): 4061-4067
Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-025-00693-w.pdf

Abstract: Since the Second World War, dignity has been the central concept for defining the indestructible intrinsic value of human beings. With the advent of ever-improving AI, the question is becoming urgent whether robots, computers, or other intelligent machines should be granted dignity and thus rights. Previous answers in the literature vary widely, ranging from the opinion that robots are mere things with no intrinsic value to the complete opposite: the demand that they be granted human rights. The reason for this disagreement is that experts in computer ethics use different conceptualizations of dignity. The aim of this article is to clarify the concept of dignity for computer ethics. A systematic literature search was carried out with a focus on foundational works on the concept of dignity. From this, components of human dignity were derived. All conceivably relevant components are listed and tested for applicability to robots or computers. Human dignity is based on a closed list of characteristics, including freedom and autonomy for moral responsibility (which includes consciousness and appropriate reactions), the capacity for suffering and respect, dignified behavior, individuality, and a few others. It is possible to apply these characteristics to robots, and if a robot has all of them, it is hard to see why it should not be granted dignity. Future discussions about the dignity of robots, computers, and other intelligent machines will gain precision if they use a common, precise concept of dignity. An open question is what happens if machines have some but not all of the components of dignity.

AI and Ethics | Pub Date: 2025-03-19 | DOI: 10.1007/s43681-025-00712-w
Olympians: humanity as a solution to the control problem for artificial superintelligence
Daniel McKay
AI and Ethics 5(4): 4049-4059
Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-025-00712-w.pdf

Abstract: The control problem for artificial superintelligences is both difficult to solve and highly costly to get wrong. In this paper, I outline the problems with current methods of solving this problem and pose a novel solution. I argue that by using a human mind as the basis for an artificial superintelligence, we can mitigate some of the dangers that such a superintelligence would pose. I call this type of human-based artificial superintelligence an Olympian.

AI and Ethics | Pub Date: 2025-03-19 | DOI: 10.1007/s43681-025-00703-x
Metaethical perspectives on 'benchmarking' AI ethics
Travis LaCroix, Alexandra Sasha Luccioni
AI and Ethics 5(4): 4029-4047
Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-025-00703-x.pdf

Abstract: Benchmarks are seen as the cornerstone for measuring technical progress in artificial intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to emotion recognition. An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system. In this paper, drawing upon research in moral philosophy and metaethics, we argue that it is impossible to develop such a benchmark. As such, alternative mechanisms are necessary for evaluating whether an AI system is 'ethical'. This is especially pressing in light of the prevalence of applied, industrial AI research. We argue that it makes more sense to talk about 'values' (and 'value alignment') rather than 'ethics' when considering the possible actions of present and future AI systems. We further highlight that, because values are unambiguously relative, focusing on values forces us to consider explicitly what the values are and whose values they are. Shifting the emphasis from ethics to values therefore gives rise to several new ways of understanding how researchers might advance research programmes for robustly safe or beneficial AI.

AI and Ethics | Pub Date: 2025-03-19 | DOI: 10.1007/s43681-025-00707-7
Fairness challenges in insurance premiums: investigating customer profiling algorithmic biases
Purity Biwott, Abdelasalam Busalim
AI and Ethics 5(5): 4455-4461

Abstract: Insurance premium pricing has long been a concern for multiple stakeholders, including actuaries, insurers, policyholders, the justice system, and society, due to the issue of discrimination. The insurance industry, marked by significant transformations over time, is currently transitioning towards automated systems. While actuarial considerations guide the calculation of premiums, ethical concerns arise because certain practices may be perceived as unfair by society and the justice system, even when justified from an actuarial standpoint. Examples include gender-based, age-based, and preexisting-condition discrimination. The aim of this paper is to provide an in-depth analysis of these ethical issues and offer recommendations for building customer profiling algorithms that prioritize fairness and eliminate discrimination. The case study presented in this paper involves the Test Achats case in Belgium, where gender-based pricing in insurance was legally challenged, leading to the removal of gender as a factor in premium calculations since 2012. Additionally, the integration of telematics in auto insurance, exemplified by products like Fairzekering, showcases efforts to monitor driving habits for personalized discounts. To navigate these complexities, a value-sensitive design matrix is proposed, outlining the impact of each value on various stakeholders, supplemented by recommendations derived from literature and case critiques. This holistic approach aims to offer a fair and transparent insurance pricing landscape while addressing the ethical implications of discrimination in customer profiling algorithms.

AI and Ethics | Pub Date: 2025-03-18 | DOI: 10.1007/s43681-025-00666-z
Responsible AI, ethics, and the AI lifecycle: how to consider the human influence?
Miriam Elia, Paula Ziethmann, Julia Krumme, Kerstin Schlögl-Flierl, Bernhard Bauer
AI and Ethics 5(4): 4011-4028
Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-025-00666-z.pdf

Abstract: Continuing the digital revolution, AI is capable of transforming our world. Because the technology is still new, we as a society can still define how we envision it integrating with existing processes. The EU AI Act follows a risk-based approach, and we argue that addressing the human influence, which poses risks along the AI lifecycle, is crucial to ensure the desired quality of a model's transition from research to reality. Therefore, we propose a holistic approach that aims to continuously guide the mindset of the involved stakeholders, namely developers and domain experts, among others, towards Responsible AI (RAI) lifecycle management. Focusing on the development view with regard to regulation, our proposed four pillars comprise the well-known concepts of Generalizability, Adaptability, and Translationality. In addition, we introduce Transversality (Welsch in Vernunft: Die Zeitgenössische Vernunftkritik Und Das Konzept der Transversalen Vernunft, Suhrkamp, Frankfurt am Main, 1995), aiming to capture the multifaceted concept of bias, and we ground the four pillars in Education and Research. Overall, we aim to provide an application-oriented summary of RAI. Our goal is to distill RAI-related principles into a concise set of concepts that emphasize implementation quality. To conclude, we introduce the ethical foundation's transition to an applicable ethos for RAI projects as part of ongoing research.