{"title":"用人权来解决人工智能的承诺和挑战","authors":"Onur Bakiner","doi":"10.1177/20539517231205476","DOIUrl":null,"url":null,"abstract":"This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over time, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. AI reproduces long-term, structural problems going beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion about the relationship between human rights and science & technology. Some of the reasons for why AI produces problematic outcomes are deep rooted in technical intricacies that human rights practitioners should be more willing than before to get involved in.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":"27 1","pages":"0"},"PeriodicalIF":6.5000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The promises and challenges of addressing artificial intelligence with human rights\",\"authors\":\"Onur Bakiner\",\"doi\":\"10.1177/20539517231205476\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over time, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. AI reproduces long-term, structural problems going beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion about the relationship between human rights and science & technology. 
Some of the reasons for why AI produces problematic outcomes are deep rooted in technical intricacies that human rights practitioners should be more willing than before to get involved in.\",\"PeriodicalId\":47834,\"journal\":{\"name\":\"Big Data & Society\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Big Data & Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/20539517231205476\",\"RegionNum\":1,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"SOCIAL SCIENCES, INTERDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Big Data & Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/20539517231205476","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
The promises and challenges of addressing artificial intelligence with human rights
This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over time, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. AI reproduces long-term, structural problems that go beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion about the relationship between human rights and science and technology. Some of the reasons why AI produces problematic outcomes are deeply rooted in technical intricacies that human rights practitioners should be more willing than before to engage with.
Journal introduction:
Big Data & Society (BD&S) is an open access, peer-reviewed scholarly journal that publishes interdisciplinary work principally in the social sciences, humanities, and computing and their intersections with the arts and natural sciences. The journal focuses on the implications of Big Data for societies and aims to connect debates about Big Data practices and their effects on various sectors such as academia, social life, industry, business, and government.
BD&S considers Big Data as an emerging field of practices, not solely defined by but generative of unique data qualities such as high volume, granularity, data linking, and mining. The journal pays attention to digital content generated both online and offline, encompassing social media, search engines, closed networks (e.g., commercial or government transactions), and open networks like digital archives, open government, and crowdsourced data. Rather than providing a fixed definition of Big Data, BD&S encourages interdisciplinary inquiries, debates, and studies on various topics and themes related to Big Data practices.
BD&S seeks contributions that analyze Big Data practices, involve empirical engagements and experiments with innovative methods, and reflect on the consequences of these practices for the representation, realization, and governance of societies. As a digital-only journal, BD&S can accommodate multimedia formats on its platform, such as complex images, dynamic visualizations, videos, and audio content. The contents of the journal encompass peer-reviewed research articles, colloquia, bookcasts, think pieces, state-of-the-art methods, and work by early career researchers.