Minds and Machines: Latest Publications

Mapping the Ethics of Generative AI: A Comprehensive Scoping Review
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-09-17 DOI: 10.1007/s11023-024-09694-w
Thilo Hagendorff
Abstract: The advent of generative artificial intelligence and its widespread adoption in society have engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, focusing especially on large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature. The study offers a comprehensive overview for scholars, practitioners, and policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others. We discuss the results, evaluate imbalances in the literature, and explore unsubstantiated risk scenarios.
Citations: 0
A Justifiable Investment in AI for Healthcare: Aligning Ambition with Reality
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-09-11 DOI: 10.1007/s11023-024-09692-y
Kassandra Karpathakis, Jessica Morley, Luciano Floridi
Abstract: Healthcare systems are grappling with critical challenges, including chronic diseases in aging populations, unprecedented staffing shortages and turnover, scarce resources, escalating demands and wait times, rising healthcare expenditure, and declining health outcomes. As a result, policymakers and healthcare executives are investing in artificial intelligence (AI) solutions to increase operational efficiency, lower healthcare costs, and improve patient care. However, the current level of investment in developing healthcare AI among members of the Global Digital Health Partnership does not yet appear to yield a high return. This is mainly due to underinvestment in the supporting infrastructure necessary for the successful implementation of AI. If a healthcare-specific AI winter is to be avoided, it is paramount that this disparity between investment in the development of AI itself and investment in the necessary supporting system components is evened out.
Citations: 0
fl-IRT-ing with Psychometrics to Improve NLP Bias Measurement
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-09-04 DOI: 10.1007/s11023-024-09695-9
Dominik Bachmann, Oskar van der Wal, Edita Chvojka, Willem H. Zuidema, Leendert van Maanen, Katrin Schulz
Abstract: To prevent ordinary people from being harmed by natural language processing (NLP) technology, finding ways to measure the extent to which a language model is biased (e.g., regarding gender) has become an active area of research. One popular class of NLP bias measures are bias benchmark datasets: collections of test items that are meant to assess a language model's preference for stereotypical versus non-stereotypical language. In this paper, we argue that such bias benchmarks should be assessed with models from the psychometric framework of item response theory (IRT). Specifically, we tie an introduction to basic IRT concepts and models with a discussion of how they could be relevant to the evaluation, interpretation, and improvement of bias benchmark datasets. Regarding evaluation, IRT provides methodological tools for assessing the quality of both individual test items (e.g., the extent to which an item can differentiate highly biased from less biased language models) and of benchmarks as a whole (e.g., the extent to which the benchmark allows us to assess not only severe but also subtle levels of model bias). Through such diagnostic tools, the quality of benchmark datasets could be improved, for example by deleting or reworking poorly performing items. Finally, with regard to interpretation, we argue that IRT models' estimates of language model bias are conceptually superior to traditional accuracy-based evaluation metrics, as the former take into account more information than just whether or not a language model provided a biased response.
Citations: 0
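The IRT machinery the abstract invokes can be made concrete with the two-parameter logistic (2PL) model, the standard starting point in item response theory. This is background illustration, not code from the paper; the reading of the latent trait as "model bias" and all names here are our own:

```python
import math

def p_stereotypical(theta, a, b):
    """2PL item response function: probability that a model with latent
    bias level `theta` chooses the stereotypical completion on an item
    with discrimination `a` and difficulty (severity) `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A highly discriminating item (a=2.0) separates models on either side
# of its severity level b=0.5 far more sharply than a weak item would.
strong = p_stereotypical(1.5, 2.0, 0.5)   # strongly biased model, ~0.88
weak = p_stereotypical(-1.5, 2.0, 0.5)    # weakly biased model, ~0.02
```

Fitting `a` and `b` per benchmark item is what lets IRT flag items that fail to discriminate between more and less biased models, which is the diagnostic use the abstract describes.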
Artificial Intelligence for the Internal Democracy of Political Parties
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-09-04 DOI: 10.1007/s11023-024-09693-x
Claudio Novelli, Giuliano Formisano, Prathm Juneja, Giulia Sandri, Luciano Floridi
Abstract: The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as intra-party democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools such as surveys. These limitations lead to partial data collection, infrequent updates, and significant resource demands. To address these issues, the article suggests that specific data-management and machine learning techniques, such as natural language processing and sentiment analysis, can improve the measurement and practice of IPD.
Citations: 0
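As a toy illustration of the kind of sentiment-analysis signal the article has in mind for IPD measurement, a lexicon-based scorer over member comments might look like the sketch below. The lexicon and comments are entirely hypothetical and not from the paper; production systems would use trained models rather than word lists:

```python
# Hypothetical sentiment lexicons for party-member deliberation text
POS = {"support", "agree", "fair", "transparent"}
NEG = {"oppose", "unfair", "excluded", "opaque"}

def sentiment(comment: str) -> int:
    """Crude lexicon score: positive word count minus negative word count."""
    words = comment.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

comments = [
    "I support the transparent primary",
    "Members feel excluded and it is unfair",
]
scores = [sentiment(c) for c in comments]  # e.g. [2, -2]
```

Aggregating such scores over time would give party leadership a continuously updated (if noisy) read on member sentiment, in contrast to the infrequent surveys the abstract criticises.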
A Causal Analysis of Harm
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-07-21 DOI: 10.1007/s11023-024-09689-7
Sander Beckers, Hana Chockler, Joseph Y. Halpern
Abstract: As autonomous systems rapidly become ubiquitous, there is a growing need for a legal and regulatory framework that addresses when and how such a system harms someone. There have been several attempts within the philosophy literature to define harm, but none has proven capable of dealing with the many examples that have been presented, leading some to suggest that the notion of harm should be abandoned and "replaced by more well-behaved notions". As harm is generally something that is caused, most of these definitions have involved causality at some level. Yet surprisingly, none of them makes use of causal models and the definitions of actual causality that they can express. In this paper, an expanded version of the conference paper by Beckers et al. (Adv Neural Inform Process Syst 35:2365–2376, 2022), we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality. The key features of our definition are that it is based on contrastive causation and uses a default utility to which the utility of actual outcomes is compared. We show that our definition is able to handle the examples from the literature, and we illustrate its importance for reasoning about situations involving autonomous systems.
Citations: 0
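The abstract's two ingredients, an actual-causation requirement plus a default-utility baseline, can be caricatured in a few lines. This is a toy sketch under strong assumptions: the actual-causality check, which in the paper requires a structural causal model, is reduced here to a boolean flag, and all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    utility: float

def harmed(outcome: Outcome, default_utility: float,
           action_causes_outcome: bool) -> bool:
    """Toy rendering of the paper's qualitative idea: an agent harms
    someone only if (1) its action is an actual cause of the outcome
    (here a given flag, standing in for a causal-model check) and
    (2) the outcome's utility falls below the default-utility baseline."""
    return action_causes_outcome and outcome.utility < default_utility

# Withholding medicine (utility 0.0) against a default of staying
# healthy (utility 1.0) counts as harm; treating does not.
harmed(Outcome("withhold medicine", 0.0), 1.0, True)   # harm
harmed(Outcome("give medicine", 1.0), 1.0, True)       # no harm
```

The contrastive element is what the `default_utility` baseline encodes: harm is judged relative to what would have obtained by default, not against an absolute scale.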
Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-07-15 DOI: 10.1007/s11023-024-09691-z
Timo Freiesleben, Gunnar König, Christoph Molnar, Álvaro Tejero-Cantero
Abstract: To learn about real-world phenomena, scientists have traditionally used models with clearly interpretable elements. Modern machine learning (ML) models, while powerful predictors, lack this direct elementwise interpretability (e.g., neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet current IML research is focused on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap, presenting a framework for designing IML methods, termed "property descriptors", that illuminate not just the model but also the phenomenon it represents. We demonstrate that property descriptors, grounded in statistical learning theory, can effectively reveal relevant properties of the joint probability distribution of the observational data. We identify existing IML methods suited for scientific inference and provide a guide for developing new descriptors with quantified epistemic uncertainty. Our framework empowers scientists to harness ML models for inference and provides directions for future IML research to support scientific understanding.
Citations: 0
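One existing IML method that fits the "property descriptor" mould is partial dependence, which describes how a model's average prediction varies with one feature. The sketch below is our illustration of the general technique, not code from the paper:

```python
def partial_dependence(model, X, feature_idx, grid):
    """Partial dependence sketch: for each grid value v, set the chosen
    feature to v in every row of X and average the model's predictions.
    The resulting curve is a property of the model (and, under the
    framework's assumptions, of the data distribution it was fit to)."""
    curve = []
    for v in grid:
        preds = []
        for row in X:
            modified = list(row)
            modified[feature_idx] = v
            preds.append(model(modified))
        curve.append(sum(preds) / len(preds))
    return curve

# With an additive model f(x) = 2*x0 + x1, the PD curve over x0 is linear
# with slope 2, shifted by the mean of x1 in the data.
f = lambda r: 2 * r[0] + r[1]
X = [[0.0, 1.0], [1.0, 3.0], [2.0, 5.0]]
pd_curve = partial_dependence(f, X, 0, [0.0, 1.0])  # [3.0, 5.0]
```

Whether such a descriptor licenses inference about the phenomenon, rather than only about the model, depends on the statistical assumptions the paper spells out.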
Submarine Cables and the Risks to Digital Sovereignty
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-07-08 DOI: 10.1007/s11023-024-09683-z
Abra Ganz, Martina Camellini, Emmie Hine, Claudio Novelli, Huw Roberts, Luciano Floridi
Abstract: The international network of submarine cables plays a crucial role in facilitating global telecommunications connectivity, carrying over 99% of all internet traffic. However, submarine cables challenge digital sovereignty due to their ownership structure, cross-jurisdictional nature, and vulnerability to malicious actors. In this article, we assess these challenges, current policy initiatives designed to mitigate them, and the limitations of those initiatives. The nature of submarine cables curtails a state's ability to regulate the infrastructure on which it relies, reduces its data security, and threatens its ability to provide telecommunication services. States currently address these challenges through regulatory controls over submarine cables and associated companies, investment in additional cable infrastructure, and physical protection measures for the cables themselves. Despite these efforts, the effectiveness of current mechanisms is hindered by significant obstacles arising from technical limitations and a lack of international coordination on regulation. We conclude by noting how these obstacles lead to gaps in states' policies and point towards how they could be improved to create a proactive approach to submarine cable governance that defends states' digital sovereignty.
Citations: 0
Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-07-06 DOI: 10.1007/s11023-024-09681-1
Andrea Ferrario, Alessandro Facchini, Alberto Termine
Abstract: The high predictive accuracy of contemporary machine-learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have an epistemic obligation to rely on the predictions of a highly accurate AI system. Contrary to this view, we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. Relying on accounts of expertise and authority from virtue epistemology, we show that epistemic expertise requires a relation with understanding that AI systems do not satisfy, and intellectual abilities that these systems do not manifest. Further, following distributed cognition theory and adapting an account by Croce of the virtues of collective epistemic agents to the case of human-AI interactions, we show that, if an AI system is successfully appropriated by a human agent, a hybrid epistemic agent emerges, which can become both an epistemic expert and an authority. Consequently, we claim that this hybrid agent is the appropriate object of a discourse around trust in AI and of the epistemic obligations that stem from its epistemic superiority.
Citations: 0
Measure for Measure: Operationalising Cognitive Realism
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-07-05 DOI: 10.1007/s11023-024-09690-0
Majid D. Beni
Abstract: This paper develops a measure of realism from within the framework of cognitive structural realism (CSR). It argues that, in the context of CSR, realism can be operationalised in terms of a balance between accuracy and generality. More specifically, the paper draws on the free energy principle to characterise the measure of realism in terms of this balance.
Citations: 0
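For readers unfamiliar with the free energy principle the abstract invokes, the standard decomposition of variational free energy makes an accuracy-versus-complexity trade-off explicit. This is background from the free-energy-principle literature, not an equation taken from the paper, and whether "generality" maps onto the complexity term is our gloss:

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(z)\,\|\,p(z)\,\right]}_{\text{complexity}} \;-\; \underbrace{\mathbb{E}_{q(z)}\!\left[\ln p(x \mid z)\right]}_{\text{accuracy}}
```

Minimising $F$ rewards accurate models while penalising over-specific ones, which is one way the balance the abstract describes can be operationalised.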
Unfairness in AI Anti-Corruption Tools: Main Drivers and Consequences
IF 7.4 · CAS Tier 3 · Computer Science
Minds and Machines Pub Date : 2024-07-03 DOI: 10.1007/s11023-024-09688-8
Fernanda Odilla
Abstract: This article discusses the potential sources and consequences of unfairness in artificial intelligence (AI) predictive tools used for anti-corruption efforts. Using the examples of three AI-based anti-corruption tools (ACTs) from Brazil, which estimate the risk of corrupt behaviour in public procurement, among public officials, and of female straw candidates in electoral contests, it illustrates how unfairness can emerge at the infrastructural, individual, and institutional levels. The article draws on interviews with law enforcement officials directly involved in the development of the tools, as well as academic and grey literature, including official reports and dissertations on the tools used as examples. Potential sources of unfairness include problematic data, statistical learning issues, the personal values and beliefs of developers and users, and the governance and practices within the organisations in which these tools are created and deployed. The findings suggest that the tools analysed were trained on inputs from past anti-corruption procedures and practices and on common-sense assumptions about corruption, which are not necessarily free from unfair disproportionality and discrimination. In designing the ACTs, the developers did not reflect on the risks of unfairness, nor did they prioritise specific technological solutions to identify and mitigate this type of problem. Although the tools analysed do not make automated decisions and only support human action, their algorithms are not open to external scrutiny.
Citations: 0
Contact: info@booksci.cn. Book学术 (booksci.cn) provides a free academic resource search service to help scholars in China and abroad find Chinese- and English-language literature. Copyright © 2023 Book学术. All rights reserved.