Artificial Intelligence and the Right to Data Protection

Ralf Poscher
DOI: 10.2139/SSRN.3769159
Journal: The Cambridge Handbook of Responsible Artificial Intelligence
Publication date: 2021-01-19
Publication type: Journal Article
Citations: 0

Abstract

One way in which the law is often related to new technological developments is as an external restriction. Lawyers are frequently asked whether a new technology is compatible with the law. This implies an asymmetry between technology and the law. Technology appears dynamic, the law stable. We know, however, that this image of the relationship between technology and the law is skewed. The right to data protection itself is an innovative reaction to the law from the early days of mass computing and automated data processing. The paper explores how an essential aspect of AI-technologies, their lack of transparency, might support a different understanding of the right to data protection. From this different perspective, the right to data protection is not regarded as a fundamental right of its own but rather as a doctrinal enhancement of each fundamental right against the abstract dangers of digital data collection and processing. This understanding of the right to data protection shifts the perspective from the individual data processing operation to the data processing system and the abstract dangers connected with it. The systems would not be measured by how they can avoid or justify the processing of some personal data but by the effectiveness of the mechanisms employed to avert the abstract dangers associated with a specific system. This shift in perspective should also allow an assessment of AI-systems despite their lack of transparency.