How to Make "AI" Intelligent; or, The Question of Epistemic Equality
Christopher Newfield
Critical AI, vol. 68, no. 1, October 1, 2023. DOI: 10.1215/2834703x-10734076
Abstract
Critics have identified a set of operational flaws in the machine learning and deep learning systems now discussed under the "AI" banner. Five of the most discussed are social biases, particularly racism; opacity, such that users cannot assess how results were generated; coercion, in that architectures, datasets, algorithms, and the like are controlled by designers and platforms rather than by users; systemic privacy violations; and the absence of academic freedom covering corporation-based research, such that results can be hyped in accordance with business objectives or suppressed and distorted if not. This article focuses on a sixth problem with AI: the term intelligence misstates the actual status and effects of the technologies in question. To help fill the gap in rigorous uses of "intelligence" in public discussion, it analyzes Brian Cantwell Smith's The Promise of Artificial Intelligence (2019), noting that humanities disciplines routinely operate with Smith's demanding notion of "genuine intelligence." To get this notion into circulation among technologists, the article calls for replacing the Two Cultures hierarchy codified by C. P. Snow in the 1950s with a system in which humanities scholars participate from the start in the construction and evaluation of "AI" research programs, on a basis of epistemic equality between qualitative and quantitative disciplines.