{"title":"Tool-Augmented Human Creativity","authors":"Kjell Jørgen Hole","doi":"10.1007/s11023-024-09677-x","DOIUrl":"https://doi.org/10.1007/s11023-024-09677-x","url":null,"abstract":"<p>Creativity is the hallmark of human intelligence. Roli et al. (Frontiers in Ecology and Evolution 9:806283, 2022) state that algorithms cannot achieve human creativity. This paper analyzes cooperation between humans and intelligent algorithmic tools to compensate for algorithms’ limited creativity. The intelligent tools have functionality from the neocortex, the brain’s center for learning, reasoning, planning, and language. The analysis provides four key insights about human-tool cooperation to solve challenging problems. First, no neocortex-based tool without feelings can achieve human creativity. Second, an interactive tool exploring users’ feeling-guided creativity enhances the ability to solve complex problems. Third, user-led abductive reasoning incorporating human creativity is essential to human-tool cooperative problem-solving. Fourth, although stakeholders must take moral responsibility for the adverse impact of tool answers, it is still essential to teach tools moral values to generate trustworthy answers. The analysis concludes that the scientific community should create neocortex-based tools to augment human creativity and enhance problem-solving rather than creating autonomous algorithmic entities with independent but less creative problem-solving.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"52 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Black-Box Testing and Auditing of Bias in ADM Systems","authors":"Tobias D. Krafft, Marc P. Hauer, Katharina Zweig","doi":"10.1007/s11023-024-09666-0","DOIUrl":"https://doi.org/10.1007/s11023-024-09666-0","url":null,"abstract":"<p>For years, the number of opaque algorithmic decision-making systems (ADM systems) with a large impact on society has been increasing: e.g., systems that compute decisions about future recidivism of criminals, credit worthiness, or the many small decision computing systems within social networks that create rankings, provide recommendations, or filter content. Concerns that such a system makes biased decisions can be difficult to investigate: be it by people affected, NGOs, stakeholders, governmental testing and auditing authorities, or other external parties. Scientific testing and auditing literature rarely focuses on the specific needs for such investigations and suffers from ambiguous terminologies. With this paper, we aim to support this investigation process by collecting, explaining, and categorizing methods of testing for bias, which are applicable to black-box systems, given that inputs and respective outputs can be observed. For this purpose, we provide a taxonomy that can be used to select suitable test methods adapted to the respective situation. This taxonomy takes multiple aspects into account, for example the effort to implement a given test method, its technical requirement (such as the need of ground truth) and social constraints of the investigation, e.g., the protection of business secrets. Furthermore, we analyze which test method can be used in the context of which black box audit concept. It turns out that various factors, such as the type of black box audit or the lack of an oracle, may limit the selection of applicable tests. With the help of this paper, people or organizations who want to test an ADM system for bias can identify which test methods and auditing concepts are applicable and what implications they entail.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"13 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflective Artificial Intelligence","authors":"Peter R. Lewis, Ştefan Sarkadi","doi":"10.1007/s11023-024-09664-2","DOIUrl":"https://doi.org/10.1007/s11023-024-09664-2","url":null,"abstract":"<p>As artificial intelligence (AI) technology advances, we increasingly delegate mental tasks to machines. However, today’s AI systems usually do these tasks with an unusual imbalance of insight and understanding: new, deeper insights are present, yet many important qualities that a human mind would have previously brought to the activity are utterly absent. Therefore, it is crucial to ask which features of minds have we replicated, which are missing, and if that matters. One core feature that humans bring to tasks, when dealing with the ambiguity, emergent knowledge, and social context presented by the world, is reflection. Yet this capability is completely missing from current mainstream AI. In this paper we ask what <i>reflective AI</i> might look like. Then, drawing on notions of reflection in complex systems, cognitive science, and agents, we sketch an architecture for reflective AI agents, and highlight ways forward.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"43 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141060373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regulation by Design: Features, Practices, Limitations, and Governance Implications","authors":"Kostina Prifti, Jessica Morley, Claudio Novelli, Luciano Floridi","doi":"10.1007/s11023-024-09675-z","DOIUrl":"https://doi.org/10.1007/s11023-024-09675-z","url":null,"abstract":"<p>Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, regulatees, methods, and technologies. Building on that structure, we distinguish among three types of RBD practices: compliance by design, value creation by design, and optimisation by design. We then explore the challenges and limitations of RBD practices, which stem from risks associated with compliance by design, contextual limitations, or methodological uncertainty. Finally, we examine the governance implications of RBD and outline possible future directions of the research field and its practices.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"15 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141060283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Transnational Fairness in Machine Learning: A Case Study in Disaster Response Systems","authors":"Cem Kozcuer, Anne Mollen, Felix Bießmann","doi":"10.1007/s11023-024-09663-3","DOIUrl":"https://doi.org/10.1007/s11023-024-09663-3","url":null,"abstract":"<p>Research on fairness in machine learning (ML) has been largely focusing on individual and group fairness. With the adoption of ML-based technologies as assistive technology in complex societal transformations or crisis situations on a global scale these existing definitions fail to account for algorithmic fairness transnationally. We propose to complement existing perspectives on algorithmic fairness with a notion of transnational algorithmic fairness and take first steps towards an analytical framework. We exemplify the relevance of a transnational fairness assessment in a case study on a disaster response system using images from online social media. In the presented case, ML systems are used as a support tool in categorizing and classifying images from social media after a disaster event as an almost instantly available source of information for coordinating disaster response. We present an empirical analysis assessing the transnational fairness of the application’s outputs-based on national socio-demographic development indicators as potentially discriminatory attributes. In doing so, the paper combines interdisciplinary perspectives from data analytics, ML, digital media studies and media sociology in order to address fairness beyond the technical system. The case study investigated reflects an embedded perspective of peoples’ everyday media use and social media platforms as the producers of sociality and processing data-with relevance far beyond the case of algorithmic fairness in disaster scenarios. Especially in light of the concentration of artificial intelligence (AI) development in the Global North and a perceived hegemonic constellation, we argue that transnational fairness offers a perspective on global injustices in relation to AI development and application that has the potential to substantiate discussions by identifying gaps in data and technology. These analyses ultimately will enable researchers and policy makers to derive actionable insights that could alleviate existing problems with fair use of AI technology and mitigate risks associated with future developments.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"54 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140937549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI Within Online Discussions: Rational, Civil, Privileged?","authors":"Jonas Aaron Carstens, Dennis Friess","doi":"10.1007/s11023-024-09658-0","DOIUrl":"https://doi.org/10.1007/s11023-024-09658-0","url":null,"abstract":"<p>While early optimists have seen online discussions as potential spaces for deliberation, the reality of many online spaces is characterized by incivility and irrationality. Increasingly, AI tools are considered as a solution to foster deliberative discourse. Against the backdrop of previous research, we show that AI tools for online discussions heavily focus on the deliberative norms of rationality and civility. In the operationalization of those norms for AI tools, the complex deliberative dimensions are simplified, and the focus lies on the detection of argumentative structures in argument mining or verbal markers of supposedly uncivil comments. If the fairness of such tools is considered, the focus lies on data bias and an input–output frame of the problem. We argue that looking beyond bias and analyzing such applications through a sociotechnical frame reveals how they interact with social hierarchies and inequalities, reproducing patterns of exclusion. The current focus on verbal markers of incivility and argument mining risks excluding minority voices and privileges those who have more access to education. Finally, we present a normative argument why examining AI tools for online discourses through a sociotechnical frame is ethically preferable, as ignoring the predicable negative effects we describe would present a form of objectionable indifference.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"13 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140882011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Genealogical Approach to Algorithmic Bias","authors":"Marta Ziosi, David Watson, Luciano Floridi","doi":"10.1007/s11023-024-09672-2","DOIUrl":"https://doi.org/10.1007/s11023-024-09672-2","url":null,"abstract":"<p>The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires <i>ex post</i> solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions and offer two main contributions. One is constructive: we develop a theoretical framework to classify these approaches according to their relevance for bias as evidence of social disparities. We draw on Pearl’s ladder of causation (Causality: models, reasoning, and inference. Cambridge University Press, Cambridge, 2000, Causality, 2nd edn. Cambridge University Press, Cambridge, 2009. https://doi.org/10.1017/CBO9780511803161) to order these XAI approaches concerning their ability to answer fairness-relevant questions and identify fairness-relevant solutions. The other contribution is critical: we evaluate these approaches in terms of their assumptions about the role of protected characteristics in discriminatory outcomes. We achieve this by building on Kohler-Hausmann’s (Northwest Univ Law Rev 113(5):1163–1227, 2019) constructivist theory of discrimination. We derive three recommendations for XAI practitioners to develop and AI policymakers to regulate tools that address algorithmic bias in its conditions and hence mitigate its future occurrence.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"16 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140881975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anthropomorphising Machines and Computerising Minds: The Crosswiring of Languages between Artificial Intelligence and Brain & Cognitive Sciences","authors":"Luciano Floridi, Anna C Nobre","doi":"10.1007/s11023-024-09670-4","DOIUrl":"https://doi.org/10.1007/s11023-024-09670-4","url":null,"abstract":"<p>The article discusses the process of “conceptual borrowing”, according to which, when a new discipline emerges, it develops its technical vocabulary also by appropriating terms from other neighbouring disciplines. The phenomenon is likened to Carl Schmitt’s observation that modern political concepts have theological roots. The authors argue that, through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as biological computers. The crosswiring between the technical languages of these disciplines is not merely metaphorical but can lead to confusion, and damaging assumptions and consequences. The article ends on an optimistic note about the self-adjusting nature of technical meanings in language and the ability to leave misleading conceptual baggage behind when confronted with advancement in understanding and factual knowledge.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"17 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"We are Building Gods: AI as the Anthropomorphised Authority of the Past","authors":"Carl Öhman","doi":"10.1007/s11023-024-09667-z","DOIUrl":"https://doi.org/10.1007/s11023-024-09667-z","url":null,"abstract":"<p>This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of vast volumes of data, literally traces of past human (speech) acts, synthesized into a single agency that is (falsely) experienced by users as extra-human. This reconceptualization, I argue, opens up new avenues of critique of LLMs by allowing the mobilization of theoretical resources from centuries of religious critique. For illustration, I draw on the Marxian religious philosophy of Martin Hägglund. From this perspective, the danger of LLMs emerge not only as bias or unpredictability, but as a temptation to abdicate our spiritual and ultimately democratic freedom in favor of what I call a <i>tyranny of the past</i>.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"1 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140803846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Benchmark for Scientific Understanding in Humans and Machines","authors":"Kristian Gonzalez Barman, Sascha Caron, Tom Claassen, Henk de Regt","doi":"10.1007/s11023-024-09657-1","DOIUrl":"https://doi.org/10.1007/s11023-024-09657-1","url":null,"abstract":"<p>Scientific understanding is a fundamental goal of science. However, there is currently no good way to measure the scientific understanding of agents, whether these be humans or Artificial Intelligence systems. Without a clear benchmark, it is challenging to evaluate and compare different levels of scientific understanding. In this paper, we propose a framework to create a benchmark for scientific understanding, utilizing tools from philosophy of science. We adopt a behavioral conception of understanding, according to which genuine understanding should be recognized as an ability to perform certain tasks. We extend this notion of scientific understanding by considering a set of questions that gauge different levels of scientific understanding, covering information retrieval, the capability to arrange information to produce an explanation, and the ability to infer how things would be different under different circumstances. We suggest building a Scientific Understanding Benchmark (SUB), formed by a set of these tests, allowing for the evaluation and comparison of scientific understanding. Benchmarking plays a crucial role in establishing trust, ensuring quality control, and providing a basis for performance evaluation. By aligning machine and human scientific understanding we can improve their utility, ultimately advancing scientific understanding and helping to discover new insights within machines.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"11 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140804017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}