AI and ethics | Pub Date: 2025-08-25 | DOI: 10.1007/s43681-025-00829-y
Erdal Yabalak
{"title":"The dual edge of AI: advancing and endangering scientific integrity in chemistry","authors":"Erdal Yabalak","doi":"10.1007/s43681-025-00829-y","DOIUrl":"10.1007/s43681-025-00829-y","url":null,"abstract":"<div><p>The integration of artificial intelligence (AI) into environmental and analytical chemistry presents both transformative opportunities and serious risks to scientific integrity. AI offers increasingly advanced capabilities in data interpretation, process automation, and predictive modeling, while its uncritical use—particularly in generating scientific texts—raises concerns about bias, error propagation, and ethical accountability. This article is a conceptual and critical analysis, not an experimental report. It critically examines the dual impact of AI on scientific research, highlighting potential threats to rigor, transparency, and authorship. The article also discusses the transformative benefits of AI in enhancing analytical efficiency, real-time monitoring, and predictive modeling in chemical research. The article emphasizes the need for robust oversight, ethical frameworks, and the preservation of human expertise in AI-assisted studies. By exploring AI-generated outputs and evaluating their implications through expert critique, this work aims to foster responsible and informed integration of AI in chemistry. Recommendations are provided for researchers, editors, and institutions to safeguard the credibility and trustworthiness of scientific communication in the era of AI.</p><h3>Graphical abstract</h3><div><figure><div><div><picture><source><img></source></picture></div></div></figure></div></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4635 - 4643"},"PeriodicalIF":0.0,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-08-25 | DOI: 10.1007/s43681-025-00721-9
Juveria Afreen, Mahsa Mohaghegh, Maryam Doborjeh
{"title":"Systematic literature review on bias mitigation in generative AI","authors":"Juveria Afreen, Mahsa Mohaghegh, Maryam Doborjeh","doi":"10.1007/s43681-025-00721-9","DOIUrl":"10.1007/s43681-025-00721-9","url":null,"abstract":"<div><p>In the era of rapid technological advancement, Artificial Intelligence (AI) is a transformative force, permeating diverse facets of society. However, bias concerns have gained prominence as AI systems become integral to decision-making processes. Bias can exert significant and extensive consequences, influencing individuals, groups, and society. The presence of bias in generative AI or machine learning systems can produce content that exhibits discriminating tendencies, perpetuates stereotypes, and contributes to inequalities. Artificial intelligence (AI) systems have the potential to be employed in various contexts that involve sensitive settings, where they are tasked with making significant judgements that can have profound impacts on individuals' lives. Consequently, it is important to establish measures that prevent these decisions from exhibiting discriminating tendencies against specific groups or populations. This exclusive exploration embarks on a comprehensive journey through the nuanced landscape of bias in AI, unravelling its intricate layers to discern different types, pinpoint underlying causes, and illuminate innovative mitigation strategies. Delving deeper, we investigate the roots of bias in AI, revealing a complex interplay of historical legacies, societal imbalances, and algorithmic intricacies. Unravelling the causes involves exploring unintentional reinforcement of existing biases, reliance on incomplete or biased training data, and the potential amplification of disparities when AI systems are deployed in diverse real-world scenarios. Various domains such as text, image, audio, video and more significant advancements in Generative Artificial Intelligence (GAI) were evidenced. Multiple challenges and proliferation of biases occur in different perspectives considered in the study. Against this backdrop, the exploration transitions to a proactive stance, offering a glimpse into cutting-edge mitigation strategies. Diverse and inclusive datasets emerge as a cornerstone, ensuring representative input for AI models. Ethical considerations throughout the development lifecycle and ongoing monitoring mechanisms prove pivotal in mitigating biases that may arise during training or deployment. Technical and non-technical strategies come to the forefront of pursuing fairness and equity in AI. The paper underscores the importance of interdisciplinary collaboration, emphasising that a collective effort spanning developers, ethicists, policymakers, and end-users is paramount for effective bias mitigation. As AI continues its ascent into various spheres of our lives, understanding, acknowledging, and addressing bias becomes an imperative. 
This exploration seeks to contribute to the discourse, fostering a deeper comprehension of the challenges posed by bias in AI and inspiring a collective commitment to building equitable, trustworthy AI systems for the future.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4789 - 4841"},"PeriodicalIF":0.0,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00721-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
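The review discusses mitigation at a conceptual level and includes no code. Purely as an illustration, and not drawn from the paper, the Python sketch below computes one common group-fairness check, the demographic parity difference (the gap in positive-output rates between groups), of the kind an ongoing monitoring mechanism might track; the function name and toy data are hypothetical.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups.
    predictions: iterable of 0/1 model outputs; groups: matching group labels."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: group "a" receives positive outputs 75% of the time, group "b" 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # -> 0.5
```

A gap near zero indicates similar positive rates across groups on this one metric; it does not capture the other notions of fairness and bias the review covers.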
AI and ethics | Pub Date: 2025-08-13 | DOI: 10.1007/s43681-025-00797-3
Chengyuan Deng, Yiqun Duan, Xin Jin, Heng Chang, Yijun Tian, Han Liu, Yichen Wang, Kuofeng Gao, Henry Peng Zou, Yiqiao Jin, Yijia Xiao, Shenghao Wu, Zongxing Xie, Weimin Lyu, Sihong He, Lu Cheng, Haohan Wang, Jun Zhuang
{"title":"Deconstructing the ethics of large language models from long-standing issues to new-emerging dilemmas: a survey","authors":"Chengyuan Deng, Yiqun Duan, Xin Jin, Heng Chang, Yijun Tian, Han Liu, Yichen Wang, kuofeng Gao, Henry Peng Zou, Yiqiao jin, Yijia Xiao, Shenghao Wu, Zongxing Xie, Weimin Lyu, Sihong He, Lu Cheng, Haohan Wang, Jun Zhuang","doi":"10.1007/s43681-025-00797-3","DOIUrl":"10.1007/s43681-025-00797-3","url":null,"abstract":"<div><p>Large Language Models (LLMs) have achieved unparalleled success across diverse language modeling tasks in recent years. However, this progress has also intensified ethical concerns, impacting the deployment of LLMs in everyday contexts. This paper provides a comprehensive survey of ethical challenges associated with LLMs, from longstanding issues such as copyright infringement, systematic bias, and data privacy, to emerging problems like truthfulness and social norms. We critically analyze existing research aimed at understanding, examining, and mitigating these ethical risks. Our survey underscores integrating ethical standards and societal values into the development of LLMs, thereby guiding the development of responsible and ethically aligned language models.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4745 - 4771"},"PeriodicalIF":0.0,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00797-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-07-29 | DOI: 10.1007/s43681-025-00798-2
Timothé Ménard, Katrina A Bramstedt
{"title":"Artificial intelligence agent in clinical trial operations: a fictional (for now) case study","authors":"Timothé Ménard, Katrina A Bramstedt","doi":"10.1007/s43681-025-00798-2","DOIUrl":"10.1007/s43681-025-00798-2","url":null,"abstract":"<div><p>AI agents are autonomous systems that catalyze drug development by processing vast data sets, modeling drug interactions, and optimizing synthesis protocols. Though not yet used in clinical trial operations, these agents could potentially manage data in electronic Case Report Forms (eCRFs), identifying anomalies, addressing basic issues, and creating reports—tasks that usually demand extensive human effort. Deploying AI agents in clinical trials could raise ethical concerns regarding autonomy, data privacy, bias, transparency, and accountability. Using a fictional use case, and building on ethical frameworks for biomedical research and on the Roche Data and AI Ethics Principles, the use of AI agents in clinical trials aims to balance efficiency with participant safety and rights, potentially hastening clinical research and the eventual approval of new treatments that could benefit patients and society while ensuring ethical integrity. This commentary explores ethical guardrails and risk mitigations.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4627 - 4633"},"PeriodicalIF":0.0,"publicationDate":"2025-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-07-23 | DOI: 10.1007/s43681-025-00783-9
Xiaolan Wu, Hui Li
{"title":"A systematic review of AI anxiety in education","authors":"Xiaolan Wu, Hui Li","doi":"10.1007/s43681-025-00783-9","DOIUrl":"10.1007/s43681-025-00783-9","url":null,"abstract":"<div><p>This systematic study examines the occurrence of AI anxiety in educational settings, utilizing insights from 32 peer-reviewed articles obtained from the WOS and SCOPUS databases. The research consolidates existing knowledge into five topic categories: (1) Prevalence, conceptualization, and stakeholder differences of AI Anxiety; (2) AI Anxiety and its relationship with self-efficacy and behavioral intentions; (3) Impact of AI anxiety on learning behaviors and outcomes; (4) Demographic and contextual factors as moderators in AI anxiety; and (5) Mitigation strategies for AI anxiety, proposing intervention frameworks to alleviate AI-related concerns. This analysis highlights the complex nature of AI anxiety, its effects on educational practices, and the need for specific initiatives to promote favorable attitudes towards AI integration in learning settings. This study identifies essential gaps and future research topics, contributing to the creation of more inclusive and adaptive educational environments in the age of artificial intelligence.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4773 - 4787"},"PeriodicalIF":0.0,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-07-23 | DOI: 10.1007/s43681-025-00796-4
Ninell Oldenburg, Anders Søgaard
{"title":"Correction: Navigating the informativeness-compression trade-off in XAI","authors":"Ninell Oldenburg, Anders Søgaard","doi":"10.1007/s43681-025-00796-4","DOIUrl":"10.1007/s43681-025-00796-4","url":null,"abstract":"","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4943 - 4943"},"PeriodicalIF":0.0,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00796-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-07-21 | DOI: 10.1007/s43681-025-00786-6
Basil Hanafi, Devyaani Singh, Mohammad Ali
{"title":"Global research perspectives on privacy and human rights through a data-driven scientometric review analysis","authors":"Basil Hanafi, Devyaani Singh, Mohammad Ali","doi":"10.1007/s43681-025-00786-6","DOIUrl":"10.1007/s43681-025-00786-6","url":null,"abstract":"<div>\u0000 \u0000 <p>This study applies a scientometric analysis using RStudio (Bibliometrix/Biblioshiny) to map 764 publications on “Human Rights” and the “Right to Privacy” from Scopus and Web of Science (1972–2024). We first examine publication trends, revealing a 12% average annual growth—peaking between 2018 and 2022—and a concentration of output in North America and Europe alongside rising contributions from Asia and Africa. Then, we examine author and institutional influence via h-index, citation frequency, and co-authorship networks and determine a central network of frequent contributors (h-index ≥ 3) and dominant hubs like Leiden University and the International Journal of Human Rights. Thematic mapping reveals the movement away from root ideas (“human rights,” “confidentiality”) to more recent themes such as “health,” “data protection,” and “artificial intelligence.” At the same time, co-word networks pinpoint the rising clusters in ethics of surveillance and AI regulation. While all this has been accomplished, global cooperation is still below 3%, and there is no standardized legal regime regulating privacy during the digital era. We suggest focused funding on cross-regional collaborations, inter-disciplinary research cutting across law, ethics, and technology, and the creation of binding policy tools—like a world “Privacy Codex”—to fill regulatory blanks and protect individual autonomy in a more integrated world.</p>\u0000 </div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4713 - 4744"},"PeriodicalIF":0.0,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-07-15 | DOI: 10.1007/s43681-025-00778-6
Muhammad Salar Khan
{"title":"The AI arms race and global order: a U.S. policy imperative","authors":"Muhammad Salar Khan","doi":"10.1007/s43681-025-00778-6","DOIUrl":"10.1007/s43681-025-00778-6","url":null,"abstract":"<div><p>As AI reshapes global power dynamics, the U.S. must act decisively to balance innovation, governance, and security. Without strategic leadership, China’s expanding AI influence could set the rules for the future. Grounded in emerging scholarship, this Brief Communication proposes a policy agenda for the U.S., emphasizing inclusive governance, strategic investment, and multilateral cooperation. It draws on recent policy reports, academic literature, and international summit proceedings to synthesize a timely policy response.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4623 - 4625"},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00778-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145121986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-06-23 | DOI: 10.1007/s43681-025-00776-8
William J. W. Choi, Benjamin Ahn, Gyan Moorthy, Kimberly Do
{"title":"AI-assisted care for older adults: a review of practical and ethical areas of concern","authors":"William J. W. Choi, Benjamin Ahn, Gyan Moorthy, Kimberly Do","doi":"10.1007/s43681-025-00776-8","DOIUrl":"10.1007/s43681-025-00776-8","url":null,"abstract":"<div><p>With a rapidly aging population, the strain on current systems of care for adults over 65 is expected to intensify. However, the explosive growth in artificial intelligence (AI) clinical support systems promises to mitigate this challenge. While AI holds significant potential to enhance care for older adults, its increasing application in this uniquely vulnerable population raises critical ethical concerns that should be addressed through a robust understanding of both the state of AI technology and its ethical implications. We review the current state of AI technology as it relates to geriatrics in four primary domains of care: (1) medical functionality; (2) emotional companionship; (3) daily living assistance; (4) access to medical information. Building on this foundation, we propose an ethical framework that provides anticipatory guidance to address the ethical impacts of AI on geriatric care in different relational contexts between the AI product, its user(s) and humanity. In doing so, we aim to clarify the role of different stakeholders in the responsible development, implementation and regulation of AI technology for older adult care.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4681 - 4691"},"PeriodicalIF":0.0,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and ethics | Pub Date: 2025-06-23 | DOI: 10.1007/s43681-025-00770-0
Michał Wieczorek, Mohammad Hosseini, Bert Gordijn
{"title":"Unpacking the ethics of using AI in primary and secondary education: a systematic literature review","authors":"Michał Wieczorek, Mohammad Hosseini, Bert Gordijn","doi":"10.1007/s43681-025-00770-0","DOIUrl":"10.1007/s43681-025-00770-0","url":null,"abstract":"<div><p>This paper provides a systematic review of the literature discussing the ethics of using artificial intelligence in primary and secondary education (AIPSED). Although recent advances in AI have led to increased interest in its use in education, discussions about the ethical implications of this new development are dispersed. Our literature review consolidates discussions that occurred in different epistemic communities interested in AIPSED and offers an ethical analysis of the debate. The review followed the PRISMA-Ethics guidelines and included 48 sources published between 2016 and 2023. Using a thematic approach, we subsumed the ethical implications of AIPSED under seventeen categories, with four outlining potential positive developments and thirteen identifying perceived negative consequences. We argue that empirical research and in-depth engagement with ethical theory and philosophy of education is needed to adequately assess the challenges introduced by AIPSED.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4693 - 4711"},"PeriodicalIF":0.0,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12434897/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145076964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}