Minds and Machines: Latest Publications

Tool-Augmented Human Creativity
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-05-26 DOI: 10.1007/s11023-024-09677-x
Kjell Jørgen Hole
{"title":"Tool-Augmented Human Creativity","authors":"Kjell Jørgen Hole","doi":"10.1007/s11023-024-09677-x","DOIUrl":"https://doi.org/10.1007/s11023-024-09677-x","url":null,"abstract":"<p>Creativity is the hallmark of human intelligence. Roli et al. (Frontiers in Ecology and Evolution 9:806283, 2022) state that algorithms cannot achieve human creativity. This paper analyzes cooperation between humans and intelligent algorithmic tools to compensate for algorithms’ limited creativity. The intelligent tools have functionality from the neocortex, the brain’s center for learning, reasoning, planning, and language. The analysis provides four key insights about human-tool cooperation to solve challenging problems. First, no neocortex-based tool without feelings can achieve human creativity. Second, an interactive tool exploring users’ feeling-guided creativity enhances the ability to solve complex problems. Third, user-led abductive reasoning incorporating human creativity is essential to human-tool cooperative problem-solving. Fourth, although stakeholders must take moral responsibility for the adverse impact of tool answers, it is still essential to teach tools moral values to generate trustworthy answers. The analysis concludes that the scientific community should create neocortex-based tools to augment human creativity and enhance problem-solving rather than creating autonomous algorithmic entities with independent but less creative problem-solving.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"52 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Black-Box Testing and Auditing of Bias in ADM Systems
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-05-25 DOI: 10.1007/s11023-024-09666-0
Tobias D. Krafft, Marc P. Hauer, Katharina Zweig
{"title":"Black-Box Testing and Auditing of Bias in ADM Systems","authors":"Tobias D. Krafft, Marc P. Hauer, Katharina Zweig","doi":"10.1007/s11023-024-09666-0","DOIUrl":"https://doi.org/10.1007/s11023-024-09666-0","url":null,"abstract":"<p>For years, the number of opaque algorithmic decision-making systems (ADM systems) with a large impact on society has been increasing: e.g., systems that compute decisions about future recidivism of criminals, credit worthiness, or the many small decision computing systems within social networks that create rankings, provide recommendations, or filter content. Concerns that such a system makes biased decisions can be difficult to investigate: be it by people affected, NGOs, stakeholders, governmental testing and auditing authorities, or other external parties. Scientific testing and auditing literature rarely focuses on the specific needs for such investigations and suffers from ambiguous terminologies. With this paper, we aim to support this investigation process by collecting, explaining, and categorizing methods of testing for bias, which are applicable to black-box systems, given that inputs and respective outputs can be observed. For this purpose, we provide a taxonomy that can be used to select suitable test methods adapted to the respective situation. This taxonomy takes multiple aspects into account, for example the effort to implement a given test method, its technical requirement (such as the need of ground truth) and social constraints of the investigation, e.g., the protection of business secrets. Furthermore, we analyze which test method can be used in the context of which black box audit concept. It turns out that various factors, such as the type of black box audit or the lack of an oracle, may limit the selection of applicable tests. With the help of this paper, people or organizations who want to test an ADM system for bias can identify which test methods and auditing concepts are applicable and what implications they entail.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"13 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
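The black-box setting the abstract describes, where only input–output pairs are observable, can be made concrete with a small sketch. The following is an illustrative example of one such test method, not the paper's taxonomy itself: probing a system with two demographic groups and comparing positive-outcome rates. The `adm_system` function, the toy data, and the 0.8 disparate-impact threshold are all assumptions for illustration.

```python
# Minimal sketch of a black-box bias probe: only inputs and outputs are
# observed, matching the paper's setting. All names, data, and thresholds
# here are illustrative assumptions, not the paper's method.

def positive_rate(system, applicants):
    """Fraction of applicants receiving the favorable decision."""
    outcomes = [system(a) for a in applicants]
    return sum(outcomes) / len(outcomes)

def disparate_impact(system, group_a, group_b):
    """Ratio of positive-outcome rates between two demographic groups."""
    rate_a = positive_rate(system, group_a)
    rate_b = positive_rate(system, group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical opaque ADM system: returns 1 (accept) or 0 (reject).
def adm_system(applicant):
    return 1 if applicant["income"] > 50_000 else 0

group_a = [{"income": 60_000}, {"income": 40_000}, {"income": 70_000}]
group_b = [{"income": 45_000}, {"income": 30_000}, {"income": 55_000}]

ratio = disparate_impact(adm_system, group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, an assumption here
    print("potential bias: investigate further")
```

Note that this particular test needs no ground truth or oracle, which is exactly the kind of technical requirement the paper's taxonomy uses to sort methods.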
Reflective Artificial Intelligence
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-05-18 DOI: 10.1007/s11023-024-09664-2
Peter R. Lewis, Ştefan Sarkadi
{"title":"Reflective Artificial Intelligence","authors":"Peter R. Lewis, Ştefan Sarkadi","doi":"10.1007/s11023-024-09664-2","DOIUrl":"https://doi.org/10.1007/s11023-024-09664-2","url":null,"abstract":"<p>As artificial intelligence (AI) technology advances, we increasingly delegate mental tasks to machines. However, today’s AI systems usually do these tasks with an unusual imbalance of insight and understanding: new, deeper insights are present, yet many important qualities that a human mind would have previously brought to the activity are utterly absent. Therefore, it is crucial to ask which features of minds have we replicated, which are missing, and if that matters. One core feature that humans bring to tasks, when dealing with the ambiguity, emergent knowledge, and social context presented by the world, is reflection. Yet this capability is completely missing from current mainstream AI. In this paper we ask what <i>reflective AI</i> might look like. Then, drawing on notions of reflection in complex systems, cognitive science, and agents, we sketch an architecture for reflective AI agents, and highlight ways forward.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"43 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141060373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
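The paper's architecture is only gestured at in the abstract. As a rough illustration of what a reflective loop could look like in code, and emphatically not the authors' design, an agent might interleave object-level action with a meta-level step that inspects its own track record and revises its strategy. Every structural choice below is an assumption for illustration.

```python
# Illustrative reflective-agent loop (an assumption for illustration,
# not the architecture proposed by Lewis and Sarkadi): the agent acts,
# then periodically reflects on its own outcomes and adapts.

import random

class ReflectiveAgent:
    def __init__(self):
        self.threshold = 0.5   # decision parameter the agent may revise
        self.history = []      # (committed, was_correct) pairs

    def act(self, confidence):
        """Object level: commit to an answer only above the threshold."""
        committed = confidence >= self.threshold
        was_correct = random.random() < confidence  # stand-in for world feedback
        self.history.append((committed, was_correct))
        return committed

    def reflect(self):
        """Meta level: inspect recent outcomes and revise the strategy."""
        recent = self.history[-20:]
        committed = [ok for decided, ok in recent if decided]
        if committed and sum(committed) / len(committed) < 0.6:
            self.threshold = min(0.9, self.threshold + 0.1)  # be more cautious
        elif len(committed) < len(recent) // 2:
            self.threshold = max(0.1, self.threshold - 0.1)  # be less timid

agent = ReflectiveAgent()
for step in range(100):
    agent.act(random.random())
    if step % 20 == 19:  # reflection is periodic, not constant
        agent.reflect()
print(f"post-reflection threshold: {agent.threshold:.1f}")
```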
Regulation by Design: Features, Practices, Limitations, and Governance Implications
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-05-17 DOI: 10.1007/s11023-024-09675-z
Kostina Prifti, Jessica Morley, Claudio Novelli, Luciano Floridi
{"title":"Regulation by Design: Features, Practices, Limitations, and Governance Implications","authors":"Kostina Prifti, Jessica Morley, Claudio Novelli, Luciano Floridi","doi":"10.1007/s11023-024-09675-z","DOIUrl":"https://doi.org/10.1007/s11023-024-09675-z","url":null,"abstract":"<p>Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, regulatees, methods, and technologies. Building on that structure, we distinguish among three types of RBD practices: compliance by design, value creation by design, and optimisation by design. We then explore the challenges and limitations of RBD practices, which stem from risks associated with compliance by design, contextual limitations, or methodological uncertainty. Finally, we examine the governance implications of RBD and outline possible future directions of the research field and its practices.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"15 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141060283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Transnational Fairness in Machine Learning: A Case Study in Disaster Response Systems
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-05-09 DOI: 10.1007/s11023-024-09663-3
Cem Kozcuer, Anne Mollen, Felix Bießmann
{"title":"Towards Transnational Fairness in Machine Learning: A Case Study in Disaster Response Systems","authors":"Cem Kozcuer, Anne Mollen, Felix Bießmann","doi":"10.1007/s11023-024-09663-3","DOIUrl":"https://doi.org/10.1007/s11023-024-09663-3","url":null,"abstract":"<p>Research on fairness in machine learning (ML) has been largely focusing on individual and group fairness. With the adoption of ML-based technologies as assistive technology in complex societal transformations or crisis situations on a global scale these existing definitions fail to account for algorithmic fairness transnationally. We propose to complement existing perspectives on algorithmic fairness with a notion of transnational algorithmic fairness and take first steps towards an analytical framework. We exemplify the relevance of a transnational fairness assessment in a case study on a disaster response system using images from online social media. In the presented case, ML systems are used as a support tool in categorizing and classifying images from social media after a disaster event as an almost instantly available source of information for coordinating disaster response. We present an empirical analysis assessing the transnational fairness of the application’s outputs-based on national socio-demographic development indicators as potentially discriminatory attributes. In doing so, the paper combines interdisciplinary perspectives from data analytics, ML, digital media studies and media sociology in order to address fairness beyond the technical system. The case study investigated reflects an embedded perspective of peoples’ everyday media use and social media platforms as the producers of sociality and processing data-with relevance far beyond the case of algorithmic fairness in disaster scenarios. Especially in light of the concentration of artificial intelligence (AI) development in the Global North and a perceived hegemonic constellation, we argue that transnational fairness offers a perspective on global injustices in relation to AI development and application that has the potential to substantiate discussions by identifying gaps in data and technology. These analyses ultimately will enable researchers and policy makers to derive actionable insights that could alleviate existing problems with fair use of AI technology and mitigate risks associated with future developments.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"54 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140937549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
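The assessment the abstract describes, grouping model outputs by a national development indicator and comparing performance across groups, can be sketched in a few lines. The indicator (HDI bands), the toy records, and the choice of recall as the metric are assumptions for illustration, not the paper's actual analysis.

```python
# Sketch of a transnational fairness check: group predictions by the
# source country's development-indicator band and compare recall.
# Data, bands, and metric choice are illustrative assumptions.

from collections import defaultdict

# (country_hdi_band, true_label, predicted_label) per social-media image
records = [
    ("high", 1, 1), ("high", 1, 1), ("high", 0, 0), ("high", 1, 0),
    ("low", 1, 0), ("low", 1, 0), ("low", 1, 1), ("low", 0, 0),
]

by_band = defaultdict(lambda: {"tp": 0, "fn": 0})
for band, truth, pred in records:
    if truth == 1:  # recall only concerns actual disaster images
        by_band[band]["tp" if pred == 1 else "fn"] += 1

for band, counts in by_band.items():
    recall = counts["tp"] / (counts["tp"] + counts["fn"])
    print(f"{band}-HDI countries: recall = {recall:.2f}")
# A large recall gap between bands would mean disaster imagery from some
# regions is systematically missed: transnational unfairness in exactly
# the sense the paper's framework is meant to surface.
```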
AI Within Online Discussions: Rational, Civil, Privileged?
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-05-04 DOI: 10.1007/s11023-024-09658-0
Jonas Aaron Carstens, Dennis Friess
{"title":"AI Within Online Discussions: Rational, Civil, Privileged?","authors":"Jonas Aaron Carstens, Dennis Friess","doi":"10.1007/s11023-024-09658-0","DOIUrl":"https://doi.org/10.1007/s11023-024-09658-0","url":null,"abstract":"<p>While early optimists have seen online discussions as potential spaces for deliberation, the reality of many online spaces is characterized by incivility and irrationality. Increasingly, AI tools are considered as a solution to foster deliberative discourse. Against the backdrop of previous research, we show that AI tools for online discussions heavily focus on the deliberative norms of rationality and civility. In the operationalization of those norms for AI tools, the complex deliberative dimensions are simplified, and the focus lies on the detection of argumentative structures in argument mining or verbal markers of supposedly uncivil comments. If the fairness of such tools is considered, the focus lies on data bias and an input–output frame of the problem. We argue that looking beyond bias and analyzing such applications through a sociotechnical frame reveals how they interact with social hierarchies and inequalities, reproducing patterns of exclusion. The current focus on verbal markers of incivility and argument mining risks excluding minority voices and privileges those who have more access to education. Finally, we present a normative argument why examining AI tools for online discourses through a sociotechnical frame is ethically preferable, as ignoring the predicable negative effects we describe would present a form of objectionable indifference.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"13 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140882011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
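The simplification the authors criticize, reducing a complex deliberative norm to surface verbal markers, is easy to see in code. The following deliberately naive sketch is an illustration of that general mechanism; the marker list and example comments are assumptions, not taken from any deployed moderation tool.

```python
# Deliberately naive marker-based incivility filter, illustrating the
# simplification the paper criticizes: a complex deliberative norm is
# reduced to surface markers. Marker list and examples are assumptions.

UNCIVIL_MARKERS = {"idiot", "stupid", "shut up"}

def flags_as_uncivil(comment: str) -> bool:
    text = comment.lower()
    return any(marker in text for marker in UNCIVIL_MARKERS)

comments = [
    "Shut up and listen, this is about my community.",     # heated but substantive: flagged
    "People like you rarely contribute anything useful.",  # exclusionary but marker-free: passes
]
for comment in comments:
    print(flags_as_uncivil(comment), "->", comment)
# The filter flags heated-but-substantive speech while polite-sounding
# dismissal passes: one concrete mechanism behind the patterns of
# exclusion the paper describes.
```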
A Genealogical Approach to Algorithmic Bias
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-05-02 DOI: 10.1007/s11023-024-09672-2
Marta Ziosi, David Watson, Luciano Floridi
{"title":"A Genealogical Approach to Algorithmic Bias","authors":"Marta Ziosi, David Watson, Luciano Floridi","doi":"10.1007/s11023-024-09672-2","DOIUrl":"https://doi.org/10.1007/s11023-024-09672-2","url":null,"abstract":"<p>The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires <i>ex post</i> solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions and offer two main contributions. One is constructive: we develop a theoretical framework to classify these approaches according to their relevance for bias as evidence of social disparities. We draw on Pearl’s ladder of causation (Causality: models, reasoning, and inference. Cambridge University Press, Cambridge, 2000, Causality, 2nd edn. Cambridge University Press, Cambridge, 2009. https://doi.org/10.1017/CBO9780511803161) to order these XAI approaches concerning their ability to answer fairness-relevant questions and identify fairness-relevant solutions. The other contribution is critical: we evaluate these approaches in terms of their assumptions about the role of protected characteristics in discriminatory outcomes. We achieve this by building on Kohler-Hausmann’s (Northwest Univ Law Rev 113(5):1163–1227, 2019) constructivist theory of discrimination. We derive three recommendations for XAI practitioners to develop and AI policymakers to regulate tools that address algorithmic bias in its conditions and hence mitigate its future occurrence.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"16 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140881975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
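The Shapley feature attributions the article examines can be computed exactly when the feature set is small. Below is a minimal, self-contained sketch of the standard Shapley formula applied to a toy model; the model, the instance, the feature names, and the mean-baseline ("feature absent") convention are illustrative assumptions, not the article's framework.

```python
# Exact Shapley-value feature attribution for a toy linear model, the
# kind of XAI output the article situates on Pearl's ladder. The model,
# instance, and baseline convention are illustrative assumptions.

from itertools import combinations
from math import factorial

def model(x):
    # Toy scoring model over features (income, debt, zip_risk).
    return 0.5 * x[0] - 0.3 * x[1] + 0.2 * x[2]

baseline = [0.0, 0.0, 0.0]   # reference input: "feature absent"
instance = [1.0, 1.0, 1.0]   # input being explained
n = len(instance)

def value(subset):
    """Model output with only the features in `subset` set to the instance."""
    x = [instance[i] if i in subset else baseline[i] for i in range(n)]
    return model(x)

def shapley(i):
    """Average marginal contribution of feature i over all coalitions."""
    total = 0.0
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for subset in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (value(set(subset) | {i}) - value(set(subset)))
    return total

for i, name in enumerate(["income", "debt", "zip_risk"]):
    print(f"{name}: {shapley(i):+.2f}")
# For a linear model the attributions recover coefficient * feature delta
# (+0.50, -0.30, +0.20) and sum to f(instance) - f(baseline).
```

If `zip_risk` proxies a protected characteristic, a large attribution to it is evidence of a discriminatory condition rather than an endpoint, which is the genealogical reading the article argues for.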
Anthropomorphising Machines and Computerising Minds: The Crosswiring of Languages between Artificial Intelligence and Brain & Cognitive Sciences
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-04-25 DOI: 10.1007/s11023-024-09670-4
Luciano Floridi, Anna C Nobre
{"title":"Anthropomorphising Machines and Computerising Minds: The Crosswiring of Languages between Artificial Intelligence and Brain & Cognitive Sciences","authors":"Luciano Floridi, Anna C Nobre","doi":"10.1007/s11023-024-09670-4","DOIUrl":"https://doi.org/10.1007/s11023-024-09670-4","url":null,"abstract":"<p>The article discusses the process of “conceptual borrowing”, according to which, when a new discipline emerges, it develops its technical vocabulary also by appropriating terms from other neighbouring disciplines. The phenomenon is likened to Carl Schmitt’s observation that modern political concepts have theological roots. The authors argue that, through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as biological computers. The crosswiring between the technical languages of these disciplines is not merely metaphorical but can lead to confusion, and damaging assumptions and consequences. The article ends on an optimistic note about the self-adjusting nature of technical meanings in language and the ability to leave misleading conceptual baggage behind when confronted with advancement in understanding and factual knowledge.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"17 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
We are Building Gods: AI as the Anthropomorphised Authority of the Past
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-04-25 DOI: 10.1007/s11023-024-09667-z
Carl Öhman
{"title":"We are Building Gods: AI as the Anthropomorphised Authority of the Past","authors":"Carl Öhman","doi":"10.1007/s11023-024-09667-z","DOIUrl":"https://doi.org/10.1007/s11023-024-09667-z","url":null,"abstract":"<p>This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of vast volumes of data, literally traces of past human (speech) acts, synthesized into a single agency that is (falsely) experienced by users as extra-human. This reconceptualization, I argue, opens up new avenues of critique of LLMs by allowing the mobilization of theoretical resources from centuries of religious critique. For illustration, I draw on the Marxian religious philosophy of Martin Hägglund. From this perspective, the danger of LLMs emerge not only as bias or unpredictability, but as a temptation to abdicate our spiritual and ultimately democratic freedom in favor of what I call a <i>tyranny of the past</i>.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"1 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140803846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards a Benchmark for Scientific Understanding in Humans and Machines
IF 7.4 | CAS Tier 3 | Computer Science
Minds and Machines Pub Date: 2024-04-25 DOI: 10.1007/s11023-024-09657-1
Kristian Gonzalez Barman, Sascha Caron, Tom Claassen, Henk de Regt
{"title":"Towards a Benchmark for Scientific Understanding in Humans and Machines","authors":"Kristian Gonzalez Barman, Sascha Caron, Tom Claassen, Henk de Regt","doi":"10.1007/s11023-024-09657-1","DOIUrl":"https://doi.org/10.1007/s11023-024-09657-1","url":null,"abstract":"<p>Scientific understanding is a fundamental goal of science. However, there is currently no good way to measure the scientific understanding of agents, whether these be humans or Artificial Intelligence systems. Without a clear benchmark, it is challenging to evaluate and compare different levels of scientific understanding. In this paper, we propose a framework to create a benchmark for scientific understanding, utilizing tools from philosophy of science. We adopt a behavioral conception of understanding, according to which genuine understanding should be recognized as an ability to perform certain tasks. We extend this notion of scientific understanding by considering a set of questions that gauge different levels of scientific understanding, covering information retrieval, the capability to arrange information to produce an explanation, and the ability to infer how things would be different under different circumstances. We suggest building a Scientific Understanding Benchmark (SUB), formed by a set of these tests, allowing for the evaluation and comparison of scientific understanding. Benchmarking plays a crucial role in establishing trust, ensuring quality control, and providing a basis for performance evaluation. By aligning machine and human scientific understanding we can improve their utility, ultimately advancing scientific understanding and helping to discover new insights within machines.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"11 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
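The three levels the abstract distinguishes (information retrieval, producing an explanation, counterfactual inference) map naturally onto a tiered test-item structure. Here is a minimal sketch of how such items could be represented and scored; the field names, the example items, the weights, and the substring grader are assumptions for illustration, not the SUB specification.

```python
# Sketch of tiered benchmark items mirroring the three levels of
# scientific understanding named in the abstract. Field names, items,
# weights, and exact-match scoring are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SUBItem:
    topic: str
    level: str        # "retrieval" | "explanation" | "counterfactual"
    question: str
    reference: str    # reference answer used for grading

items = [
    SUBItem("gas laws", "retrieval",
            "What does Boyle's law relate?", "pressure and volume"),
    SUBItem("gas laws", "explanation",
            "Why does pressure rise when volume shrinks at fixed T?",
            "more frequent wall collisions per unit area"),
    SUBItem("gas laws", "counterfactual",
            "If the temperature doubled at fixed volume, what happens?",
            "pressure doubles"),
]

def score(agent_answer: str, item: SUBItem) -> bool:
    # Placeholder grader; a real benchmark would need human/model grading.
    return item.reference in agent_answer.lower()

# Weight harder levels more heavily, so counterfactual competence
# dominates the score, echoing the paper's graded notion of understanding.
weights = {"retrieval": 1, "explanation": 2, "counterfactual": 3}
answers = ["pressure and volume", "particles hit walls more often", "pressure doubles"]
total = sum(weights[it.level] for it, a in zip(items, answers) if score(a, it))
print(f"weighted understanding score: {total}/{sum(weights[i.level] for i in items)}")
```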