Field-building and the epistemic culture of AI safety
Shazeda Ahmed, Klaudia Jaźwińska, Archana Ahlawat, Amy Winecoff, Mona Wang
First Monday, volume 29, number 4 (14 April 2024). DOI: https://doi.org/10.5210/fm.v29i4.13626
The emerging field of “AI safety” has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced artificial intelligence (AI) while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. In this paper, we contend that overlapping communities interested in these ideas have merged into what we refer to as the broader “AI safety epistemic community,” which is sustained through its mutually reinforcing community-building and knowledge production practices. We support this assertion through an analysis of four core sites in this community’s epistemic culture: 1) online community-building through Web forums and career advising; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them.
First Monday (Computer Science: Computer Networks and Communications)
CiteScore: 2.20
Self-citation rate: 0.00%
Articles published: 86
About the journal:
First Monday is one of the first openly accessible, peer-reviewed journals on the Internet, solely devoted to the Internet. Since its start in May 1996, First Monday has published 1,035 papers in 164 issues; these papers were written by 1,316 different authors. In addition, eight special issues have appeared. The most recent special issue was entitled A Web site with a view — The Third World on First Monday and it was edited by Eduardo Villanueva Mansilla. First Monday is indexed in Communication Abstracts, Computer & Communications Security Abstracts, DoIS, eGranary Digital Library, INSPEC, Information Science & Technology Abstracts, LISA, PAIS, and other services.