Minds and Machines: Latest Publications (IF 7.4, CAS Zone 3, Computer Science)

Reliability and Interpretability in Science and Deep Learning
Minds and Machines, Pub Date: 2024-06-25, DOI: 10.1007/s11023-024-09682-0
Luigi Scorzato
Abstract: In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models—and in particular Deep Neural Network (DNN) models—which represent a rather significant departure from standard scientific modelling. It is therefore necessary to integrate standard error analysis with a deeper epistemological analysis of the possible differences between DNN models and standard scientific modelling, and of the possible implications of these differences for the assessment of reliability. This article offers several contributions. First, it emphasises the ubiquitous role of model assumptions (in both ML and traditional science) against the illusion of theory-free science. Secondly, model assumptions are analysed from the point of view of their (epistemic) complexity, which is shown to be language-independent. It is argued that the high epistemic complexity of DNN models hinders the estimate of their reliability and their prospect of long-term progress. Some potential ways forward are suggested. Thirdly, the article identifies the close relation between a model's epistemic complexity and its interpretability, as introduced in the context of responsible AI. This clarifies in which sense—and to what extent—the lack of understanding of a model (the black-box problem) impacts its interpretability in a way that is independent of individual skills. It also clarifies how interpretability is a precondition for a plausible assessment of the reliability of any model, which cannot be based on statistical analysis alone. The article focuses on the comparison between traditional scientific models and DNN models, but Random Forest (RF) and Logistic Regression (LR) models are also briefly considered.
Citations: 0

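The second contribution above turns on a contrast that is easy to state concretely. The sketch below is only an illustration under a loudly flagged simplification: it uses raw parameter counts as a crude, hypothetical proxy for the paper's notion of epistemic complexity, which the paper defines over model assumptions in a language-independent way, not over weights. The architecture and data are invented for the example.

```python
# Illustrative sketch only: the paper's "epistemic complexity" is a property of a
# model's *assumptions*, not a parameter count; counting parameters is a crude,
# hypothetical proxy used here just to make the contrast concrete.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 200)    # data from a simple law plus noise

# Traditional scientific model: y = a*x + b, two transparent parameters.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
rmse_sci = np.sqrt(np.mean((A @ np.array([a, b]) - y) ** 2))

# A small DNN for the same task: every weight acts, in effect, as an assumption.
layers = [(1, 64), (64, 64), (64, 1)]             # hypothetical architecture
n_dnn_params = sum(i * o + o for i, o in layers)  # weights + biases = 4353

print(f"linear model: 2 parameters, RMSE ~ {rmse_sci:.3f}")
print(f"small MLP:    {n_dnn_params} parameters before any data is seen")
# Two models could report identical statistical error, yet the ~2000x gap in
# assumption complexity stays invisible to that report -- the abstract's point
# that reliability assessment cannot rest on statistical analysis alone.
```
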
Human Autonomy at Risk? An Analysis of the Challenges from AI
Minds and Machines, Pub Date: 2024-06-24, DOI: 10.1007/s11023-024-09665-1
Carina Prunkl
Abstract: Autonomy is a core value that is deeply entrenched in the moral, legal, and political practices of many societies. The development and deployment of artificial intelligence (AI) have raised new questions about AI's impacts on human autonomy. However, systematic assessments of these impacts are still rare and often conducted on a case-by-case basis. In this article, I provide a conceptual framework that both ties together seemingly disjoint issues about human autonomy and highlights the differences between them. In the first part, I distinguish between the distinct concerns currently addressed under the umbrella term 'human autonomy'. In particular, I show how differentiating between autonomy-as-authenticity and autonomy-as-agency helps us pinpoint separate challenges from AI deployment. Some of these challenges are already well known (e.g. online manipulation or limitation of freedom), whereas others have received much less attention (e.g. adaptive preference formation). In the second part, I address the different roles AI systems can assume in the context of autonomy. In particular, I differentiate between AI systems taking on agential roles and AI systems being used as tools. I conclude that while there is no 'silver bullet' to address concerns about human autonomy, considering its various dimensions can help us systematically address the associated risks.
Citations: 0

Anthropomorphizing Machines: Reality or Popular Myth?
Minds and Machines, Pub Date: 2024-06-20, DOI: 10.1007/s11023-024-09686-w
Simon Coghlan
Abstract: According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. Even if people's behavior and language regarding human-like machines suggest they believe those machines really have mental states, it is possible that they do not believe that at all. The paper also briefly discusses the potential implications of regarding such anthropomorphism as a popular myth. The exercise illuminates the difficult concept of anthropomorphism, helping to clarify possible human relations with or toward machines that increasingly resemble humans and animals.
Citations: 0

“The Human Must Remain the Central Focus”: Subjective Fairness Perceptions in Automated Decision-Making
Minds and Machines, Pub Date: 2024-06-19, DOI: 10.1007/s11023-024-09684-y
Daria Szafran, Ruben L. Bach
Abstract: The increasing use of algorithms to allocate resources and services in both private industry and public administration has sparked discussions about their consequences for inequality and fairness in contemporary societies. Previous research has shown that the use of automated decision-making (ADM) tools in high-stakes scenarios such as the legal justice system can lead to adverse societal outcomes, such as systematic discrimination. Scholars have since proposed a variety of metrics to counteract and mitigate biases in ADM processes. While these metrics focus on technical notions of fairness, they do not consider how members of the public, the subjects most affected by algorithmic decisions, perceive fairness in ADM. To shed light on the subjective fairness perceptions of individuals, this study analyzes answers to open-ended fairness questions about hypothetical ADM scenarios that were embedded in the German Internet Panel (Wave 54, July 2021), a probability-based longitudinal online survey. Respondents evaluated the fairness of vignettes describing the use of ADM tools across different contexts, and subsequently explained their evaluation in a textual answer. Using qualitative content analysis, we inductively coded those answers (N = 3697). Based on their individual understanding of fairness, respondents addressed a wide range of aspects of fairness in ADM, reflected in the 23 codes we identified. We subsumed those codes under four overarching themes: Human elements in decision-making, Shortcomings of the data, Social impact of AI, and Properties of AI. Our codes and themes provide a valuable resource for understanding which factors influence public fairness perceptions of ADM.
Citations: 0

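Because the abstract describes its pipeline (open-ended answers, inductive codes, overarching themes) only in prose, a minimal sketch of the final tallying step may help. Only the four theme names come from the abstract; the code labels and the toy coded answers are hypothetical stand-ins for the paper's 23 codes and N = 3697 responses, and the inductive coding itself is human interpretive work that no script performs.

```python
# Minimal sketch of the tallying stage of a qualitative content analysis.
# Code labels and example answers are hypothetical; only the four theme
# names are taken from the abstract above.
from collections import Counter

THEMES = {
    "human_oversight":      "Human elements in decision-making",
    "final_say_human":      "Human elements in decision-making",
    "biased_training_data": "Shortcomings of the data",
    "incomplete_records":   "Shortcomings of the data",
    "job_displacement":     "Social impact of AI",
    "consistency_of_ai":    "Properties of AI",
}

# Each respondent's open-ended answer, after human coders assigned codes.
coded_answers = [
    ["human_oversight", "biased_training_data"],
    ["final_say_human"],
    ["consistency_of_ai", "human_oversight"],
]

code_counts = Counter(code for answer in coded_answers for code in answer)
theme_counts = Counter(THEMES[code] for code in code_counts.elements())

for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} coded mentions")
```
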
A Teleological Approach to Information Systems Design
Minds and Machines, Pub Date: 2024-06-18, DOI: 10.1007/s11023-024-09673-1
Mattia Fumagalli, Roberta Ferrario, Giancarlo Guizzardi
Abstract: In recent years, the design and production of information systems have grown significantly. However, these information artefacts often exhibit characteristics that compromise their reliability. This issue appears to stem from the neglect or underestimation of certain crucial aspects in the application of Information Systems Design (ISD). For example, it is frequently difficult to prove when one of these products does not work properly or works incorrectly (falsifiability), their usage is often left to subjective experience and somewhat arbitrary choices (anecdotes), and their functions are often obscure to users as well as designers (explainability). In this paper, we propose an approach that can be used to support the analysis and (re-)design of information systems, grounded in a well-known theory of information, namely teleosemantics. This approach emphasizes the importance of grounding the design and validation process on the dependencies between four core components: the producer (or designer), the produced (or used) information system, the consumer (or user), and the design (or use) purpose. We analyze the ambiguities and problems of considering these components separately. We then present some possible ways in which they can be combined through the teleological approach. We also discuss guidelines to prevent ISD from failing to address critical issues. Finally, we discuss perspectives on applications to existing information technologies and some implications for explainable AI and ISD.
Citations: 0

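A minimal sketch of the core idea, not the authors' formalism: representing the four components the abstract names (producer, information system, consumer, purpose) as one explicit record makes "does the artefact work?" a falsifiable check against its declared purpose rather than a matter of anecdote. All names and the acceptance test below are hypothetical.

```python
# Hypothetical illustration: binding producer, artefact, consumer, and purpose
# into one record, so the purpose supplies an explicit falsifier for the design.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TeleologicalDesign:
    producer: str                                  # who designed the artefact
    information_system: Callable[[float], float]   # the artefact itself
    consumer: str                                  # who uses its outputs
    purpose: str                                   # what it is supposed to do
    acceptance_test: Callable[[Callable[[float], float]], bool]  # the falsifier

    def works(self) -> bool:
        """True iff the artefact satisfies its declared design purpose."""
        return self.acceptance_test(self.information_system)

design = TeleologicalDesign(
    producer="sensor team",
    information_system=lambda celsius: celsius * 9 / 5 + 32,
    consumer="dashboard users",
    purpose="convert Celsius readings to Fahrenheit",
    acceptance_test=lambda f: abs(f(100.0) - 212.0) < 1e-9,
)
print(design.works())  # True: the declared purpose gives the check its content
```

Without the purpose field, the acceptance test has nothing to be a test of, which is one way to read the abstract's warning against considering the components separately.
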
In the Craftsman’s Garden: AI, Alan Turing, and Stanley Cavell
Minds and Machines, Pub Date: 2024-06-13, DOI: 10.1007/s11023-024-09676-y
Marie Theresa O’Connor
Abstract: There is rising skepticism within public discourse about the nature of AI. By skepticism, I mean doubt about what we know about AI. At the same time, some AI speakers are raising the kinds of issues that usually matter greatly in analysis, such as issues relating to consent and coercion. This essay takes up the question of whether we should analyze a conversation differently because it is between a human and an AI rather than between two humans and, if so, why. When is it acceptable, for instance, to read the phrases "please stop" or "please respect my boundaries" as meaning something other than what those phrases ordinarily mean, and what makes it so? If we ignore denials of consent, or put them in scare quotes, we should have a good reason. This essay focuses on two thinkers, Alan Turing and Stanley Cavell, who in different ways answer the question of whether it matters that a speaker is a machine. It proposes that Cavell's work on the problem of other minds, in particular Cavell's story in The Claim of Reason of an automaton whom he imagines meeting in a craftsman's garden, may be especially helpful in thinking about how to analyze what AI has to say.
Citations: 0

Find the Gap: AI, Responsible Agency and Vulnerability
Minds and Machines, Pub Date: 2024-06-05, DOI: 10.1007/s11023-024-09674-0
Shannon Vallor, Tillmann Vierkant
Abstract: The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and to exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. We note that there nevertheless is a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of vulnerability between human agents and sociotechnical systems like AI/AS. In conclusion, we note that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and to preserve the conditions for responsible human agency.
Citations: 0

Models of Possibilities Instead of Logic as the Basis of Human Reasoning
Minds and Machines, Pub Date: 2024-06-04, DOI: 10.1007/s11023-024-09662-4
P. N. Johnson-Laird, Ruth M. J. Byrne, Sangeet S. Khemlani
Abstract: The theory of mental models and its computer implementations have led to crucial experiments showing that no standard logic—the sentential calculus and all logics that include it—can underlie human reasoning. The theory replaces the logical concept of validity (the conclusion is true in all cases in which the premises are true) with necessity (conclusions describe no more than the possibilities to which the premises refer). Many inferences are both necessary and valid. But experiments show that individuals make necessary inferences that are invalid, e.g., "Few people ate steak or sole; therefore, few people ate steak." Other crucial experiments show that individuals reject inferences that are not necessary but are valid, e.g., "He had the anesthetic or felt pain, but not both; therefore, he had the anesthetic or felt pain, or both." Nothing in logic can justify the rejection of a valid inference: a denial of its conclusion is inconsistent with its premises, and inconsistencies yield valid inferences of any conclusions whatsoever, including the one denied. So inconsistencies are catastrophic in logic. In contrast, the model theory treats all inferences as defeasible (nonmonotonic), and inconsistencies have the null model, which yields only the null model in conjunction with any other premises. So inconsistencies are local. This allows truth values in natural languages to be much richer than those that occur in the semantics of standard logics, and individuals verify assertions on the basis of both facts and possibilities that did not occur.
Citations: 0

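The abstract's second example can be checked mechanically, which makes the validity/necessity contrast vivid. In the sketch below, "possibilities" are encoded as satisfying truth assignments; this encoding is our simplification for illustration, not the authors' mental-models implementation.

```python
# Contrast from the abstract, using its own anesthetic/pain example.
# Validity: the conclusion is true in every case in which the premise is true.
# Necessity (as glossed above): the conclusion describes no possibilities
# beyond those to which the premise refers.
from itertools import product

def possibilities(formula):
    """All truth assignments (anesthetic, pain) that make the formula true."""
    return {(a, p) for a, p in product([True, False], repeat=2) if formula(a, p)}

premise    = lambda a, p: (a or p) and not (a and p)  # anesthetic XOR pain
conclusion = lambda a, p: a or p                      # inclusive disjunction

valid     = possibilities(premise) <= possibilities(conclusion)
necessary = possibilities(conclusion) <= possibilities(premise)

print(f"valid:     {valid}")      # True: logic finds no counterexample
print(f"necessary: {necessary}")  # False: the conclusion admits (True, True),
                                  # a possibility the premise rules out
```

Running the sketch prints valid: True and necessary: False, matching the reported finding: people reject this valid inference because its conclusion opens up a possibility (anesthetic and pain together) that the premise has already eliminated.
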
The Hierarchical Correspondence View of Levels: A Case Study in Cognitive Science
Minds and Machines, Pub Date: 2024-06-03, DOI: 10.1007/s11023-024-09678-w
Luke Kersten
Abstract: There is a general conception of levels in philosophy which says that the world is arrayed into a hierarchy of levels and that different modes of analysis correspond to each level of this hierarchy, what can be labelled the 'Hierarchical Correspondence View of Levels' (or HCL). The trouble is that, despite its considerable lineage and general status in philosophy of science and metaphysics, the HCL has largely escaped analysis in specific domains of inquiry. The goal of this paper is to take up a recent call for domain-specificity by examining the role of the HCL in cognitive science. I argue that the HCL is, in fact, a conception of levels that has been employed in cognitive science, and that cognitive scientists should avoid its use where possible. The argument is that the HCL is problematic when applied to cognitive science specifically because it fails to distinguish two important kinds of shifts used when analysing information processing systems: shifts in grain and shifts in analysis. I conclude by proposing a revised version of the HCL which accommodates the distinction.
Citations: 0

The New Mechanistic Approach and Cognitive Ontology—Or: What Role do (Neural) Mechanisms Play in Cognitive Ontology?
Minds and Machines, Pub Date: 2024-06-02, DOI: 10.1007/s11023-024-09679-9
Beate Krickel
Abstract: Cognitive ontology has become a popular topic in philosophy, cognitive psychology, and cognitive neuroscience. At its center is the question of which cognitive capacities should be included in the ontology of cognitive psychology and cognitive neuroscience. One common strategy for answering this question is to look at brain structures and determine the cognitive capacities for which they are responsible. Some authors interpret this strategy as a search for neural mechanisms, as understood by the so-called new mechanistic approach. In this article, I show that this new mechanistic answer is confronted with what I call the triviality problem. A discussion of this problem shows that one cannot derive a meaningful cognitive ontology from neural mechanisms alone. Nonetheless, neural mechanisms play a crucial role in the discovery of a cognitive ontology because they are epistemic proxies for best systematizations.
Citations: 0