{"title":"这不是一个漏洞,而是一个特征:人工智能专家和数据科学家如何解释算法的不透明性。","authors":"Netta Avnoon,Gil Eyal","doi":"10.1177/03063127251364509","DOIUrl":null,"url":null,"abstract":"The opacity of machine learning (ML) algorithms is a significant concern in academic and regulatory circles. An emergent sociology of algorithms, however, argues that far from opacity being an inherent quality of algorithms, it is socially constructed and contingent upon certain choices and decisions. In this article, we show that a valorization of opacity is a key component of the epistemic culture of ML experts. While earlier campaigns for mechanical objectivity contrasted the inconsistency of human experts with the reliability of procedures and machines, we found that ML experts valorize precisely those moments when complex algorithms 'surprised' them with unexpected outcomes. They thereby endowed machines with a mysterious capacity to make predictions based on calculations and factors that humans cannot grasp. In this way, they turned opacity from a problem into an epistemic virtue. We trace this valorization of opacity to the jurisdictional struggles through which ML expertise emerged and differentiated itself from its two competitors: the 'expert systems' type of the 'artificial intelligence' sub-field of computer science on the one hand and inferential statistics on the other. In the course of these struggles, ML experts absorbed a theory of human expertise as tacit and inarticulable, extended it to include algorithms, and then leveraged this newly acquired version of opacity to dramatize the differences that separated them from statisticians. The analysis is based on sixty in-depth, semi-structured, and open-ended interviews with ML experts and data scientists working today, as well as historical research on the origins of data science.","PeriodicalId":51152,"journal":{"name":"Social Studies of Science","volume":"75 1","pages":"3063127251364509"},"PeriodicalIF":2.7000,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"It's not a bug, it's a feature: How AI experts and data scientists account for the opacity of algorithms.\",\"authors\":\"Netta Avnoon,Gil Eyal\",\"doi\":\"10.1177/03063127251364509\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The opacity of machine learning (ML) algorithms is a significant concern in academic and regulatory circles. An emergent sociology of algorithms, however, argues that far from opacity being an inherent quality of algorithms, it is socially constructed and contingent upon certain choices and decisions. In this article, we show that a valorization of opacity is a key component of the epistemic culture of ML experts. While earlier campaigns for mechanical objectivity contrasted the inconsistency of human experts with the reliability of procedures and machines, we found that ML experts valorize precisely those moments when complex algorithms 'surprised' them with unexpected outcomes. They thereby endowed machines with a mysterious capacity to make predictions based on calculations and factors that humans cannot grasp. In this way, they turned opacity from a problem into an epistemic virtue. 
We trace this valorization of opacity to the jurisdictional struggles through which ML expertise emerged and differentiated itself from its two competitors: the 'expert systems' type of the 'artificial intelligence' sub-field of computer science on the one hand and inferential statistics on the other. In the course of these struggles, ML experts absorbed a theory of human expertise as tacit and inarticulable, extended it to include algorithms, and then leveraged this newly acquired version of opacity to dramatize the differences that separated them from statisticians. The analysis is based on sixty in-depth, semi-structured, and open-ended interviews with ML experts and data scientists working today, as well as historical research on the origins of data science.\",\"PeriodicalId\":51152,\"journal\":{\"name\":\"Social Studies of Science\",\"volume\":\"75 1\",\"pages\":\"3063127251364509\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2025-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Social Studies of Science\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/03063127251364509\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"HISTORY & PHILOSOPHY OF SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Social Studies of Science","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/03063127251364509","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HISTORY & PHILOSOPHY OF SCIENCE","Score":null,"Total":0}
It's not a bug, it's a feature: How AI experts and data scientists account for the opacity of algorithms.
The opacity of machine learning (ML) algorithms is a significant concern in academic and regulatory circles. An emergent sociology of algorithms, however, argues that far from opacity being an inherent quality of algorithms, it is socially constructed and contingent upon certain choices and decisions. In this article, we show that a valorization of opacity is a key component of the epistemic culture of ML experts. While earlier campaigns for mechanical objectivity contrasted the inconsistency of human experts with the reliability of procedures and machines, we found that ML experts valorize precisely those moments when complex algorithms 'surprised' them with unexpected outcomes. They thereby endowed machines with a mysterious capacity to make predictions based on calculations and factors that humans cannot grasp. In this way, they turned opacity from a problem into an epistemic virtue. We trace this valorization of opacity to the jurisdictional struggles through which ML expertise emerged and differentiated itself from its two competitors: the 'expert systems' type of the 'artificial intelligence' sub-field of computer science on the one hand and inferential statistics on the other. In the course of these struggles, ML experts absorbed a theory of human expertise as tacit and inarticulable, extended it to include algorithms, and then leveraged this newly acquired version of opacity to dramatize the differences that separated them from statisticians. The analysis is based on sixty in-depth, semi-structured, and open-ended interviews with ML experts and data scientists working today, as well as historical research on the origins of data science.
Journal description:
Social Studies of Science is an international peer-reviewed journal that encourages submissions of original research on science, technology, and medicine. The journal is multidisciplinary, publishing work from a range of fields including political science, sociology, economics, history, philosophy, psychology, social anthropology, and legal and educational disciplines. This journal is a member of the Committee on Publication Ethics (COPE).