It's not a bug, it's a feature: How AI experts and data scientists account for the opacity of algorithms.

IF 2.7 | CAS Tier 2 (Sociology) | JCR Q1, HISTORY & PHILOSOPHY OF SCIENCE
Netta Avnoon, Gil Eyal
{"title":"这不是一个漏洞,而是一个特征:人工智能专家和数据科学家如何解释算法的不透明性。","authors":"Netta Avnoon,Gil Eyal","doi":"10.1177/03063127251364509","DOIUrl":null,"url":null,"abstract":"The opacity of machine learning (ML) algorithms is a significant concern in academic and regulatory circles. An emergent sociology of algorithms, however, argues that far from opacity being an inherent quality of algorithms, it is socially constructed and contingent upon certain choices and decisions. In this article, we show that a valorization of opacity is a key component of the epistemic culture of ML experts. While earlier campaigns for mechanical objectivity contrasted the inconsistency of human experts with the reliability of procedures and machines, we found that ML experts valorize precisely those moments when complex algorithms 'surprised' them with unexpected outcomes. They thereby endowed machines with a mysterious capacity to make predictions based on calculations and factors that humans cannot grasp. In this way, they turned opacity from a problem into an epistemic virtue. We trace this valorization of opacity to the jurisdictional struggles through which ML expertise emerged and differentiated itself from its two competitors: the 'expert systems' type of the 'artificial intelligence' sub-field of computer science on the one hand and inferential statistics on the other. In the course of these struggles, ML experts absorbed a theory of human expertise as tacit and inarticulable, extended it to include algorithms, and then leveraged this newly acquired version of opacity to dramatize the differences that separated them from statisticians. The analysis is based on sixty in-depth, semi-structured, and open-ended interviews with ML experts and data scientists working today, as well as historical research on the origins of data science.","PeriodicalId":51152,"journal":{"name":"Social Studies of Science","volume":"75 1","pages":"3063127251364509"},"PeriodicalIF":2.7000,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"It's not a bug, it's a feature: How AI experts and data scientists account for the opacity of algorithms.\",\"authors\":\"Netta Avnoon,Gil Eyal\",\"doi\":\"10.1177/03063127251364509\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The opacity of machine learning (ML) algorithms is a significant concern in academic and regulatory circles. An emergent sociology of algorithms, however, argues that far from opacity being an inherent quality of algorithms, it is socially constructed and contingent upon certain choices and decisions. In this article, we show that a valorization of opacity is a key component of the epistemic culture of ML experts. While earlier campaigns for mechanical objectivity contrasted the inconsistency of human experts with the reliability of procedures and machines, we found that ML experts valorize precisely those moments when complex algorithms 'surprised' them with unexpected outcomes. They thereby endowed machines with a mysterious capacity to make predictions based on calculations and factors that humans cannot grasp. In this way, they turned opacity from a problem into an epistemic virtue. 
We trace this valorization of opacity to the jurisdictional struggles through which ML expertise emerged and differentiated itself from its two competitors: the 'expert systems' type of the 'artificial intelligence' sub-field of computer science on the one hand and inferential statistics on the other. In the course of these struggles, ML experts absorbed a theory of human expertise as tacit and inarticulable, extended it to include algorithms, and then leveraged this newly acquired version of opacity to dramatize the differences that separated them from statisticians. The analysis is based on sixty in-depth, semi-structured, and open-ended interviews with ML experts and data scientists working today, as well as historical research on the origins of data science.\",\"PeriodicalId\":51152,\"journal\":{\"name\":\"Social Studies of Science\",\"volume\":\"75 1\",\"pages\":\"3063127251364509\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2025-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Social Studies of Science\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/03063127251364509\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"HISTORY & PHILOSOPHY OF SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Social Studies of Science","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/03063127251364509","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HISTORY & PHILOSOPHY OF SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

The opacity of machine learning (ML) algorithms is a significant concern in academic and regulatory circles. An emergent sociology of algorithms, however, argues that far from opacity being an inherent quality of algorithms, it is socially constructed and contingent upon certain choices and decisions. In this article, we show that a valorization of opacity is a key component of the epistemic culture of ML experts. While earlier campaigns for mechanical objectivity contrasted the inconsistency of human experts with the reliability of procedures and machines, we found that ML experts valorize precisely those moments when complex algorithms 'surprised' them with unexpected outcomes. They thereby endowed machines with a mysterious capacity to make predictions based on calculations and factors that humans cannot grasp. In this way, they turned opacity from a problem into an epistemic virtue. We trace this valorization of opacity to the jurisdictional struggles through which ML expertise emerged and differentiated itself from its two competitors: the 'expert systems' type of the 'artificial intelligence' sub-field of computer science on the one hand and inferential statistics on the other. In the course of these struggles, ML experts absorbed a theory of human expertise as tacit and inarticulable, extended it to include algorithms, and then leveraged this newly acquired version of opacity to dramatize the differences that separated them from statisticians. The analysis is based on sixty in-depth, semi-structured, and open-ended interviews with ML experts and data scientists working today, as well as historical research on the origins of data science.
Source Journal
Social Studies of Science (Management Science · History & Philosophy of Science)
CiteScore: 5.70
Self-citation rate: 6.70%
Annual publications: 45
Review time: >12 weeks
Journal Description: Social Studies of Science is an international peer-reviewed journal that encourages submissions of original research on science, technology and medicine. The journal is multidisciplinary, publishing work from a range of fields including political science, sociology, economics, history, philosophy, psychology, social anthropology, and legal and educational disciplines. This journal is a member of the Committee on Publication Ethics (COPE).