AI and ethics: Latest Articles

Meaningful work in peril? Preserving self-efficacy in the age of artificial intelligence
AI and ethics | Pub Date: 2025-04-22 | DOI: 10.1007/s43681-025-00717-5
Helen M. E. Winter, A. Marco Turk, Sebastian F. Winter
Abstract: Artificial intelligence (AI) is rapidly transforming our socioeconomic landscape. Unprecedented advances in knowledge acquisition, human augmentation, and wealth creation are accompanied by equally unprecedented challenges from AI-driven job automation and displacement. This commentary explores AI’s impact on the experience of meaningful work as a function of self-efficacy, an individual’s belief in their capability to execute behaviors necessary to achieve specific goals. Specifically, we posit that vocational self-efficacy, cultivated through experiences of meaningful work, constitutes an essential component of individual and collective wellbeing and flourishing. We critically examine how evolving AI-labor relations may impact work significance, highlighting AI’s dual role as both an enabler and a threat to vocational self-efficacy. We conclude by outlining actionable recommendations for civil society, policymakers, and the private sector aimed at cultivating meaning and self-efficacy, both as an ethical imperative and a means to foster societal cohesion in the age of AI.
(AI and ethics, vol. 5, no. 4, pp. 3445–3453)
Citations: 0
Implementing ethical principles in AI: an initial discussion
AI and ethics | Pub Date: 2025-04-15 | DOI: 10.1007/s43681-025-00710-y
Mykhailo Danilevskyi, Fernando Perez-Tellez, Davide Buscaldi
Abstract: In recent years, there has been much discussion of ethics in IT and AI, and many researchers and organisations have proposed guidelines to address privacy, fairness, and explainability challenges in creating trustworthy AI. In this paper, we discuss ethical principles in the context of AI and their significance in developing trustworthy AI solutions. We consider the problem of categorising ethical principles in IT and concentrate our discussion on privacy, fairness, and explainability, principles we believe meaningfully contribute to trust in AI systems. We give an overview of the available privacy regulations in the EU and US and look at how to achieve compliance with them, including private data detection, data anonymisation techniques, and toolkits. From a practical perspective, we analyse fairness and bias problems and discuss the issue of fairness assessment and metrics. An enormous number of techniques have been developed to improve the fairness of AI solutions; we focus on fairness improvement techniques and a few popular toolkits in which these techniques are implemented. Explainability, another ethical principle discussed, is a socially important property because it ensures understanding of AI system decision-making and transparency during inspection; ensuring explainability is important for high-risk applications in healthcare, finance, and criminal justice. Finally, we outline approaches that help improve the level of explainability. With this review and analysis, we contribute to the knowledge of available techniques and toolkits that AI practitioners can use as an initial step in implementing ethical principles in AI solutions.
(AI and ethics, vol. 5, no. 4, pp. 3549–3555)
Citations: 0
When AI meets counterfactuals: the ethical implications of counterfactual world simulation models
AI and ethics | Pub Date: 2025-04-11 | DOI: 10.1007/s43681-025-00718-4
Lara Kirfel, Robert J. MacCoun, Thomas Icard, Tobias Gerstenberg
Abstract: This paper examines the transformative potential of AI embedded with counterfactual world simulation models (CWSMs). A CWSM uses multimodal evidence, such as the CCTV footage of a road accident, to build a high-fidelity 3D reconstruction of what happened. It can answer causal questions, such as whether the accident happened because the driver was speeding, by simulating what would have happened in relevant counterfactual situations. We sketch a normative and ethical framework that guides and constrains the simulation of counterfactuals. We address the challenge of ensuring fidelity in reconstructions while simultaneously preventing stereotype perpetuation during counterfactual simulations. We anticipate different modes in which users will interact with AI-powered CWSMs and discuss how their outputs may be presented. Finally, we address the prospective applications of CWSMs in the legal domain, recognizing both their potential to revolutionize legal proceedings and the ethical concerns they engender. Sketching a new genre of AI, this paper seeks to illuminate the path forward for responsible and effective use of CWSMs.
(AI and ethics, vol. 5, no. 5, pp. 4593–4604)
Citations: 0
Artificial intelligence tools: a potential for error-free scholarly communication in Nigerian universities
AI and ethics | Pub Date: 2025-04-10 | DOI: 10.1007/s43681-025-00714-8
Peter-Ebuka Okafor, Abasiafak Ndifreke Udosen, Benson Ikechukwu Igboanugo
Abstract: Recently, efforts have been intensified in Nigerian universities to address errors and ethical issues in scholarly communications. However, despite significant efforts by the government and university administrators to resolve these challenges, most researchers still lack the expertise to effectively utilize advanced artificial intelligence (AI) tools that are essential for producing ethically sound and error-free scholarly articles. AI has rapidly emerged as one of the most significant and transformative innovations of our time, with wide-ranging applications in almost every field of knowledge. Among these applications, research has arguably witnessed the fastest growth and adoption of AI-driven tools in developed nations. This study provides an in-depth review of the present applications of AI-driven tools in higher education in Nigeria, with a focus on universities, and explores the factors affecting innovative research output in Nigerian universities. The reviewed articles identify several contributing factors, such as poor research sponsorship, limited access to online scholarly articles, unethical practices, unintentional errors, and brain drain. Benefits, implications, and strategies for improvement, along with a roadmap outlining the way forward, are suggested.
(AI and ethics, vol. 5, no. 4, pp. 3537–3548)
Citations: 0
Comparing hand-crafted and deep learning approaches for detecting AI-generated text: performance, generalization, and linguistic insights
AI and ethics | Pub Date: 2025-04-10 | DOI: 10.1007/s43681-025-00699-4
Ramtin Ardeshirifar
Abstract: This study investigates techniques for detecting machine-generated text, a critical task in the era of advanced language models. We compare two approaches: a hand-crafted feature-based method and a deep learning method using RoBERTa. Experiments were conducted on diverse datasets, including the Human ChatGPT Comparison Corpus (HC3) and GPT-2 outputs. The hand-crafted approach achieved a 94% F1 score on HC3 but struggled with cross-dataset generalization. In contrast, the RoBERTa-based method demonstrated superior performance and adaptability, achieving a 98% F1 score on HC3 and 97.68% on GPT-2. Our findings underscore the need for adaptive detection methods as language models evolve. This research contributes to the development of robust techniques for identifying AI-generated content, addressing critical challenges in AI ethics and responsible technology use.
(AI and ethics, vol. 5, no. 4, pp. 4197–4209)
Citations: 0
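The abstract above contrasts a hand-crafted feature pipeline with a fine-tuned RoBERTa classifier but does not spell out the feature set. Purely as an illustrative sketch (the function name and the specific features below are hypothetical, not taken from the paper), stylometric measures of this general kind, such as word length, vocabulary diversity, and sentence length, are typical inputs to such a feature-based detector:

```python
import re

def stylometric_features(text: str) -> dict:
    """Compute a few simple hand-crafted stylometric features of the kind
    a feature-based AI-text detector might feed to a downstream classifier.
    Illustrative only; not the feature set used in the cited study."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words:
        return {"avg_word_len": 0.0, "type_token_ratio": 0.0,
                "words_per_sentence": 0.0}
    return {
        # Mean word length in characters.
        "avg_word_len": sum(len(w) for w in words) / len(words),
        # Vocabulary diversity: unique words over total words.
        "type_token_ratio": len(set(words)) / len(words),
        # Average sentence length in words.
        "words_per_sentence": len(words) / max(len(sentences), 1),
    }
```

A vector of such features per document would then be passed to a conventional classifier (e.g. logistic regression), whereas the RoBERTa approach learns its representation directly from the raw text, which is one plausible reason for the generalization gap the abstract reports.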
Human trust in AI: a relationship beyond reliance
AI and ethics | Pub Date: 2025-04-09 | DOI: 10.1007/s43681-025-00690-z
Sara Blanco
Abstract: Trust in artificial intelligence (AI) is often discussed by both the general public and a part of academia. The discourse on trust in AI is often presented as analogous to trust in people. However, it is unclear whether the concept of trust can suitably be extended to describe relationships between humans and other entities. In this article, I will argue that the main features of trusting relationships apply whether the trustee is a human or an AI system. This view is opposed to the claim that only humans can be trusted and that technology, at its best, can merely be relied on. However, it is commonly accepted that reliance has weaker implications than trust. We often rely on those whom we need or want to do something for us, regardless of their motivation to act. I will argue that motivation is relevant for trust, both in humans and in AI. Because of this, I propose trust as a suitable goal to aim for when shaping human-AI relationships.
(AI and ethics, vol. 5, no. 4, pp. 4167–4180)
Citations: 0
The thinness of GenAI: body size in relation to the construction of the normate through GenAI image models
AI and ethics | Pub Date: 2025-04-09 | DOI: 10.1007/s43681-025-00684-x
Aisha Sobey
Abstract: While generative AI (genAI) image models are increasingly popular, they are not without critique for their biased outputs. Building on assessments of Dall-E’s prejudiced and homogenising production of race, this paper seeks to understand how fat bodies are presented compared to straight-size bodies in 649 images created by nine different, free-to-use genAI image models. The images are examined through critical visual analysis and reflexive thematic analysis. In the first instance, auditing highlights that, if not explicitly prompted to show larger bodies, none of the models create fatness or disability. Secondly, in the outputs with a larger body-size prompt, the models produced images which contravened their own content guidelines, showed fewer positive facial expressions, and had higher rates of mistakes and anomalies compared to images without a body-size prompt. This paper argues that the social imaginaries created through genAI images are foreclosing on difference and forming new normate standards of personhood, which explicitly exclude people who exist in socially deviant bodies.
(AI and ethics, vol. 5, no. 4, pp. 4181–4196)
Citations: 0
Assessing the role of artificial intelligence in quality of legal education with special reference to sustainable development goal
AI and ethics | Pub Date: 2025-04-09 | DOI: 10.1007/s43681-025-00700-0
Prashant Kumar Varun
Abstract: The transformational potential of artificial intelligence (AI) has become increasingly apparent in a variety of fields, including legal education. This study investigates the potential of AI to transform legal education in order to meet the 2030 Sustainable Development Goals (SDGs). As nations and cultures globally struggle to achieve the SDGs, legal professionals play an important role in establishing legal frameworks that support progress toward these goals. AI can improve the delivery of legal curricula by providing customized learning experiences targeted to individuals, and it can help educators create current content and enhance evaluation, making education more dynamic, accessible, and affordable and expanding its reach. Quality legal education is a cornerstone of achieving the SDGs set for 2030, and the effective utilization of AI holds significant promise not only for expanding access but also for improving quality. AI-powered legal research tools have the potential to revolutionize legal research for law students, researchers, and legal practitioners: they can provide more efficient and comprehensive ways to explore legal precedents, case law, and statutes, empowering legal professionals to perform their duties with greater accuracy and efficiency. By improving access to legal information, these tools can play a significant role in the formulation of laws and regulations that align with the SDGs. This research paper explores the multifaceted ways in which AI can contribute to advancing legal education and, consequently, supporting the attainment of the SDGs.
(AI and ethics, vol. 5, no. 4, pp. 3609–3615)
Citations: 0
Reconceptualizing inventorship: addressing the challenge of artificial intelligence-generated inventions
AI and ethics | Pub Date: 2025-04-09 | DOI: 10.1007/s43681-025-00709-5
Ricardo Villegas Tovar, David Pinto Avendaño
Abstract: The emergence of Artificial Intelligence (AI) has brought significant advancements in various fields, including the generation of inventions. However, this progress has raised complex questions regarding inventorship, particularly in cases where AI systems autonomously generate inventions. This paper explores the question of whether AI entities can be recognized as inventors of patentable inventions. It discusses the challenges of AI-generated inventions and proposes updated patent examination criteria and guidelines. The European Patent Office and the United States Patent and Trademark Office have recently updated their guidelines to affirm the inventive capacity of individuals while considering the role of AI as a support tool in the examination process. The paper concludes by suggesting the consideration of a sui generis form of protection for AI-generated inventions, similar to concepts in intellectual property law, but argues against granting legal personality and assets to AI. Instead, it advocates updating patent examination criteria to accommodate AI’s role in the inventive process.
(AI and ethics, vol. 5, no. 5, pp. 4475–4479)
Citations: 0
Using large language models as a road map for establishing core values in a legal vacuum
AI and ethics | Pub Date: 2025-04-09 | DOI: 10.1007/s43681-025-00706-8
Theo Theunissen, Lydia Duijvestijn
Abstract: Problem: This study investigates establishing core values in a legal vacuum, where research, design, or implementation of an invention or innovation is feasible but no regulations exist. We leverage large language models (LLMs) to analyze codes of conduct from 1000 organizations (profit and not-for-profit) in order to identify core values. From 493 non-profit organizations and companies on the Fortune 500 list, a total of 8646 core values, including variations, were found across 89 sectors. Using accuracy, bias, completeness, consistency, and relevance as metrics to validate the LLMs’ results, the number of core values is reduced to 362. The research employs a ten-step decision-making process to guide ethical decision-making when clear rules, laws, or regulations are absent, and the framework shows how objectivity can be maintained without losing personal values. This approach offers organizations a structured method for developing and articulating their ethical frameworks, particularly in areas where legal guidance is limited or nonexistent, and contributes to understanding how core values are established and applied in the absence of formal regulations.
(AI and ethics, vol. 5, no. 4, pp. 4147–4166)
Citations: 0