AI and ethics: Latest Articles

Addressing corrigibility in near-future AI systems
AI and ethics Pub Date: 2024-05-16 DOI: 10.1007/s43681-024-00484-9
Erez Firt
Abstract: When we discuss future advanced autonomous AI systems, one worry is that these systems will be capable enough to resist external intervention, even when such intervention is crucial, for example, when the system is not behaving as intended. The rationale behind such worries is that intelligent systems will be motivated to resist attempts to modify or shut them down in order to preserve their objectives. To address these worries, we want our future systems to be corrigible, i.e., to tolerate, cooperate with, or assist many forms of outside correction. One important reason for treating corrigibility as an important safety property is that we already know how hard it is to construct AI agents with a sufficiently general utility function; and the more advanced and capable the agent is, the less likely it is that a complex baseline utility function built into it will be perfect from the start. In this paper, we try to achieve corrigibility in (at least) systems based on known or near-future (imaginable) technology, by endorsing and integrating different approaches to building AI-based systems. Our proposal replaces attempts to provide a corrigible utility function with a corrigible software architecture: this takes the agency off the RL agent, which now becomes an RL solver, and grants it to the system as a whole.
AI and ethics 5(2): 1481-1490. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00484-9.pdf
Citations: 0
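The architectural move the abstract describes, demoting the RL agent to a solver and locating agency (and hence the correction channel) in the enclosing system, could be sketched roughly as follows. This is a minimal illustration under our own assumptions; the class and method names are hypothetical, not the paper's implementation.

```python
# A minimal sketch of the "corrigible architecture" idea: the RL component
# only proposes actions (an RL solver), while the enclosing system keeps the
# agency and unconditionally honors external correction and shutdown. All
# names are hypothetical illustrations, not the paper's actual code.

class RLSolver:
    """Policy component: maps observations to proposed actions, nothing more."""

    def propose_action(self, observation: int) -> int:
        # Placeholder policy; a real solver would query a trained model.
        return observation % 3


class CorrigibleSystem:
    """System-level wrapper that holds the agency the solver lacks."""

    def __init__(self, solver: RLSolver):
        self.solver = solver
        self.shutdown_requested = False

    def request_shutdown(self) -> None:
        # External intervention is handled here, outside the solver, so the
        # solver has neither the incentive nor the means to resist it.
        self.shutdown_requested = True

    def step(self, observation: int, operator_override: int | None = None):
        if self.shutdown_requested:
            return None                # tolerate shutdown unconditionally
        if operator_override is not None:
            return operator_override   # cooperate with outside correction
        return self.solver.propose_action(observation)


system = CorrigibleSystem(RLSolver())
print(system.step(7))                        # 1: the solver's proposal
print(system.step(7, operator_override=0))   # 0: operator correction wins
system.request_shutdown()
print(system.step(7))                        # None: system is shut down
```

The point of the design is that corrigibility no longer has to be encoded in the solver's objective at all; the wrapper enforces it structurally.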
Assuring AI safety: fallible knowledge and the Gricean maxims
AI and ethics Pub Date: 2024-05-15 DOI: 10.1007/s43681-024-00490-x
Marten H. L. Kaas, Ibrahim Habli
Abstract: In this paper we argue that safety claims, when justified by a safety case, are descriptive fallible knowledge claims. Even if the aim of a safety case were to justify infallible knowledge about the safety of a system, such infallible safety knowledge is impossible to attain in the case of AI-enabled systems. By their nature, AI-enabled systems preclude the possibility of obtaining infallible knowledge concerning their safety or lack thereof. We suggest that one can communicate knowledge of an AI-enabled system's safety by structuring the exchange according to Paul Grice's Cooperative Principle, which can be achieved via adherence to the Gricean maxims of communication. Furthermore, these same maxims can be used to evaluate the calibre of the exchange, the aim being to ensure that communication about an AI-enabled system's safety is of the highest calibre: in short, that it is relevant, of sufficient quantity and quality, and communicated perspicuously. The high-calibre communication of safety claims to an epistemically diverse group of stakeholders is vitally important given the increasingly participatory nature of AI-enabled system design, development and assessment.
AI and ethics 5(2): 1467-1480. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00490-x.pdf
Citations: 0
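One way the four maxims could be operationalized as a review checklist for safety-case communication is sketched below. The criteria wording and the yes/no scoring are our illustrative assumptions, not the authors' instrument.

```python
# A rough sketch of using the four Gricean maxims as a review checklist for
# safety-case communication. The criteria wording and the boolean scoring
# are illustrative assumptions, not the authors' method.

GRICEAN_MAXIMS = {
    "quantity": "Is the claim as informative as required, no more and no less?",
    "quality": "Is the claim backed by evidence the author believes to be true?",
    "relation": "Is the claim relevant to the hazard being addressed?",
    "manner": "Is the claim stated perspicuously, without ambiguity?",
}

def review_safety_claim(claim: str, answers: dict[str, bool]) -> list[str]:
    """Return the maxims a safety claim appears to violate."""
    return [maxim for maxim in GRICEAN_MAXIMS if not answers.get(maxim, False)]

# Example: an overclaimed, under-evidenced safety statement.
violations = review_safety_claim(
    "The perception model is safe in all weather conditions.",
    {"quantity": False, "quality": False, "relation": True, "manner": True},
)
print(violations)  # ['quantity', 'quality']
```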
The prospects of using AI in euthanasia and physician-assisted suicide: a legal exploration
AI and ethics Pub Date: 2024-05-15 DOI: 10.1007/s43681-024-00491-w
Hannah van Kolfschooten
Abstract: The Netherlands was the first country to legalize euthanasia and physician-assisted suicide. This paper offers a first legal perspective on the prospects of using AI in the Dutch practice of euthanasia and physician-assisted suicide. It responds to the Regional Euthanasia Review Committees' interest in exploring technological solutions to improve current procedures. The specific characteristics of AI, notably the capability to process enormous amounts of data in a short time and to generate new insights in individual cases, may for example alleviate the increased workload of review committees caused by the continuous rise in euthanasia cases. The paper considers three broad categories for the use of AI in Dutch euthanasia practice: (1) the physician's assessment of euthanasia requests, (2) the actual execution of euthanasia, and (3) the retrospective review of cases by the Regional Euthanasia Review Committees. Exploring the legal considerations around each avenue, both in the EU AI Act and in the Dutch legal framework, the paper aims to facilitate the societal discussion on the role of technology in such deeply human decisions. This debate is equally relevant to other countries that have legalized euthanasia (e.g. Belgium and Canada) or physician-assisted suicide (e.g. Switzerland and numerous US states).
AI and ethics 5(2): 1461-1466. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00491-w.pdf
Citations: 0
Safeguarding human values: rethinking US law for generative AI's societal impacts
AI and ethics Pub Date: 2024-05-07 DOI: 10.1007/s43681-024-00451-4
Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno
Abstract: Our interdisciplinary study examines the effectiveness of US law in addressing the complex challenges posed by generative AI systems to fundamental human values, including physical and mental well-being, privacy, autonomy, diversity, and equity. Through the analysis of diverse hypothetical scenarios developed in collaboration with experts, we identified significant shortcomings and ambiguities within the existing legal protections. Constitutional and civil rights law currently struggles to hold AI companies responsible for AI-assisted discriminatory outputs. Moreover, even without considering the liability shield provided by Section 230, existing liability laws may not effectively remedy unintentional and intangible harms caused by AI systems. Demonstrating causal links for liability claims such as defamation or product liability proves exceptionally difficult due to the intricate and opaque nature of these systems. To effectively address these unique and evolving risks, we propose a "Responsible AI Legal Framework" that adapts to recognize new threats and utilizes a multi-pronged approach. This framework would enshrine fundamental values in legal frameworks, establish comprehensive safety guidelines, and implement liability models tailored to the complexities of human-AI interactions. By proactively mitigating unforeseen harms such as mental health impacts and privacy breaches, the framework aims to create a legal landscape capable of navigating the exciting yet precarious future brought forth by generative AI technologies.
AI and ethics 5(2): 1433-1459. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00451-4.pdf
Citations: 0
A global scale comparison of risk aggregation in AI assessment frameworks
AI and ethics Pub Date: 2024-05-06 DOI: 10.1007/s43681-024-00479-6
Anna Schmitz, Michael Mock, Rebekka Görge, Armin B. Cremers, Maximilian Poretschkin
Abstract: AI applications bear inherent risks in various risk dimensions, such as insufficient reliability, robustness, fairness or data protection. It is well known that trade-offs between these dimensions can arise; for example, a highly accurate AI application may reflect the unfairness and bias of real-world data, or may provide hard-to-explain outcomes because of its internal complexity. AI risk assessment frameworks aim to provide systematic approaches to risk assessment across these dimensions. The overall trustworthiness assessment is then generated by some form of risk aggregation among the risk dimensions. This paper provides a systematic overview of the risk aggregation schemes used in existing AI risk assessment frameworks, focusing on how potential trade-offs among the risk dimensions are incorporated. To this end, we examine how the general risk notion, the application context, the extent of risk quantification, and specific instructions for evaluation may influence overall risk aggregation. We discuss whether the current frameworks provide meaningful and practicable guidance, and derive recommendations for the further operationalization of risk aggregation from both horizontal and vertical perspectives.
AI and ethics 5(2): 1407-1432. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00479-6.pdf
Citations: 0
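Two aggregation schemes of the kind such a comparison covers are a compensatory weighted sum, where strong dimensions can offset weak ones, and a non-compensatory worst-case rule, where one bad dimension dominates. The sketch below contrasts them; the scores and weights are made-up illustrations, not values from any framework in the paper.

```python
# Two common risk-aggregation schemes: a compensatory weighted sum
# (trade-offs allowed between dimensions) and a non-compensatory worst-case
# rule (the weakest dimension determines overall risk). Scores and weights
# below are invented for illustration.

risk_scores = {  # 0 = no risk, 1 = maximal risk, per dimension
    "reliability": 0.2, "robustness": 0.3, "fairness": 0.7, "privacy": 0.1,
}
weights = {"reliability": 0.3, "robustness": 0.2, "fairness": 0.3, "privacy": 0.2}

def weighted_sum(scores: dict, w: dict) -> float:
    # Compensatory: strong dimensions can offset weak ones.
    return sum(scores[d] * w[d] for d in scores)

def worst_case(scores: dict) -> float:
    # Non-compensatory: a critical weakness cannot be hidden by trade-offs.
    return max(scores.values())

print(f"weighted sum: {weighted_sum(risk_scores, weights):.2f}")  # 0.35
print(f"worst case:   {worst_case(risk_scores):.2f}")             # 0.70
```

The gap between the two outputs (0.35 vs. 0.70 here) is exactly the kind of divergence that makes the choice of aggregation scheme consequential for an overall trustworthiness verdict.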
A comprehensive study on navigating neuroethics in Cyberspace
AI and ethics Pub Date: 2024-05-02 DOI: 10.1007/s43681-024-00486-7
Ms. Kritika
Abstract: The booming multidisciplinary landscape spanning neuroscience, ethics and cybersecurity brings into focus the emerging need to develop ethical standards for neural data so that it can be handled safely in cyberspace. The synergy between neuroscience and cybersecurity underscores the transformative potential of technologies such as BCI, EEG, fMRI and MEG, highlighting the ethical imperative to address privacy, autonomy, individual rights, and the security of neural data. The paper examines the sensitivity of neural data as an emerging concern for cybersecurity professionals and individuals alike, who must guard against threats such as phishing, brainjacking and vishing, and argues for proper guidelines and frameworks that secure informed consent before confidential data is shared, data that can otherwise be misused at the hands of cybercriminals.
AI and ethics 5(1): 93-100
Citations: 0
AI for all: Diversity and Inclusion in AI
AI and ethics Pub Date: 2024-05-02 DOI: 10.1007/s43681-024-00485-8
Didar Zowghi, Muneera Bano
AI and ethics 4(4): 873-876
Citations: 0
Addressing diversity in hiring procedures: a generative adversarial network approach
AI and ethics Pub Date: 2024-05-02 DOI: 10.1007/s43681-024-00445-2
Tales Marra, Emeric Kubiak
Abstract: The combination of machine learning and organizational psychology has led to innovative methods for addressing the diversity-validity dilemma in personnel selection: the trade-off between selecting valid predictors of job performance and minimizing adverse impact. Recent technological advancements provide new strategies to mitigate gender biases while preserving the ability to predict job performance accurately. Our research introduces a novel framework consisting of three blocks: a gating block to filter user data, a bias measurement block using an adversarial network to detect gender bias, and a feature importance block that identifies and removes biased features that do not contribute to performance prediction. We applied this model architecture to both simulated datasets and real-world hiring scenarios, with a particular emphasis on personality-based algorithms, aiming to refine hiring predictive models so that they are gender-fair and meet EEOC standards. In simulated environments, 70% of the predictive models saw their impact ratio improve, moving 22.73% closer to the ideal ratio while incurring only a slight 4.16% decrease in performance predictability. Real-world data testing yielded similar improvements, with 71% of the models showing an increased impact ratio, 18.8% closer to the ideal, and a 2.18% increase in predictive accuracy for job performance. The findings suggest that the application of neural networks can be an effective strategy for enhancing fairness in hiring practices with only minimal loss in predictive accuracy. Future research should explore the refinement of these models and the implications of their deployment in high-stakes hiring environments.
AI and ethics 5(2): 1381-1405
Citations: 0
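The impact ratio the abstract optimizes corresponds to the EEOC's four-fifths (80%) rule for adverse impact: the selection rate of the protected group divided by that of the highest-selected group should be at least 0.8. The sketch below computes it; the applicant numbers are invented for illustration, and the paper's adversarial debiasing pipeline itself is not shown.

```python
# Computing the EEOC four-fifths adverse-impact metric. The applicant
# numbers are made up for illustration; the paper's framework (gating,
# adversarial bias measurement, feature removal) is what moves this ratio
# toward 1.0 while preserving predictive validity.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def impact_ratio(rate_protected: float, rate_reference: float) -> float:
    return rate_protected / rate_reference

rate_women = selection_rate(selected=30, applicants=100)  # 0.30
rate_men = selection_rate(selected=40, applicants=100)    # 0.40

ratio = impact_ratio(rate_women, rate_men)
print(f"impact ratio = {ratio:.2f}")               # 0.75
print("passes four-fifths rule:", ratio >= 0.8)    # False -> adverse impact
```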
A semi-automated software model to support AI ethics compliance assessment of an AI system guided by ethical principles of AI
AI and ethics Pub Date: 2024-05-02 DOI: 10.1007/s43681-024-00480-z
Maria Assunta Cappelli, Giovanna Di Marzo Serugendo
Abstract: Compliance with principles and guidelines for ethical AI has a significant impact on companies engaged in the development of artificial intelligence (AI) systems. Ethics is a broad concept that continuously evolves over time and across cultural and geographical boundaries, and international organisations (IOs), individual states, and private groups all have an interest in defining the ethics of AI. IOs, as well as regional and national bodies, have issued many decisions on AI ethics. Developing a system that complies with this ethical framework poses a complex challenge for companies, and failure to comply with ethical principles can have severe consequences, making compliance a key issue. Furthermore, there is a shortage of technical tools for ensuring that AI systems comply with ethical criteria. This scarcity of ethics compliance checking tools, together with the current focus on defining ethical guidelines for AI development, has led us to propose a semi-automated software model for verifying the ethical compliance of an AI system's code. To implement this model, we focus on the following aspects: (1) a literature review to identify existing ethical compliance systems, (2) a review of principles and guidelines for ethical AI to determine the international and European views on AI ethics, and (3) the identification of commonly accepted principles and sub-principles of AI. These elements inform (4) our proposed design of a semi-automated software for verifying the ethical compliance of AI systems, both at design time (ethics-by-design perspective) and afterwards on the resulting software.
AI and ethics 5(2): 1357-1380. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00480-z.pdf
Citations: 0
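A minimal sketch of what mapping commonly accepted principles to semi-automated checks over a system's artifacts might look like is given below. The principle names follow widely cited taxonomies (transparency, fairness, privacy); the artifact fields and check logic are our illustrative assumptions, not the model proposed in the paper.

```python
# A minimal sketch of a semi-automated ethics compliance check: commonly
# accepted principles are mapped to concrete checks over system artifacts.
# The artifact fields and check logic are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class SystemArtifacts:
    has_model_card: bool = False
    logs_decisions: bool = False
    fairness_metrics_reported: bool = False
    personal_data_minimized: bool = False

CHECKS = {
    "transparency": lambda a: a.has_model_card and a.logs_decisions,
    "fairness": lambda a: a.fairness_metrics_reported,
    "privacy": lambda a: a.personal_data_minimized,
}

def compliance_report(artifacts: SystemArtifacts) -> dict[str, str]:
    # "Semi-automated": a failed check is flagged for human review rather
    # than treated as definitive non-compliance.
    return {principle: ("pass" if check(artifacts) else "flag for review")
            for principle, check in CHECKS.items()}

report = compliance_report(SystemArtifacts(
    has_model_card=True, logs_decisions=True,
    fairness_metrics_reported=False, personal_data_minimized=True))
print(report)  # {'transparency': 'pass', 'fairness': 'flag for review', 'privacy': 'pass'}
```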
ACESOR: a critical engagement in systems of oppression AI assessment tool
AI and ethics Pub Date: 2024-04-29 DOI: 10.1007/s43681-024-00478-7
Zari McFadden
Abstract: The subarea of AI ethics and fairness research has produced broad and far-reaching work on the impact of AI on society. Unfortunately, much of this work has not included critical engagement with systems of oppression, limiting our understanding of why AI has the impacts it does. This paper introduces the Assessment of Critical Engagement in Systems of Oppression in Research (ACESOR) rubric, an assessment tool that can help researchers bridge this gap by providing guided critical engagement. Interviews were conducted with experts who engage with systems of oppression in their work to gather feedback on the field's current state, barriers to critical engagement, and the rubric and its use. Based on expert input, the field is doing valuable work overall, but more is needed to increase critical engagement, with some changes required at the systemic level. The rubric is a valuable tool for researchers and practitioners, but it is not a single solution. This paper introduces the ACESOR rubric, highlights expert feedback, and provides an example of how the rubric could be used, with the goal that the rubric as a tool will push the field toward more critical engagement.
AI and ethics 5(2): 1329-1355
Citations: 0
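For a sense of how a guided rubric of this kind might be encoded for scoring, consider the sketch below. The criteria and the 0-3 scale are hypothetical illustrations; the published ACESOR rubric defines its own items and anchors.

```python
# A sketch of encoding a rubric-style assessment for a paper's engagement
# with systems of oppression. Criteria names and the 0-3 scale are
# hypothetical illustrations, not the published ACESOR instrument.

RUBRIC_CRITERIA = [
    "names the specific systems of oppression at play",
    "situates harms in their historical and structural context",
    "engages affected communities or their scholarship",
    "connects technical choices to systemic impacts",
]

def engagement_score(ratings: list[int], scale_max: int = 3) -> float:
    """Average per-criterion ratings (0..scale_max) into a 0-1 score."""
    assert len(ratings) == len(RUBRIC_CRITERIA), "one rating per criterion"
    return sum(ratings) / (scale_max * len(ratings))

print(f"engagement score: {engagement_score([3, 2, 1, 2]):.2f}")  # 0.67
```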