The Cambridge Handbook of Responsible Artificial Intelligence: Latest Articles

Artificial Intelligence and the Right to Data Protection
Ralf Poscher
Pub Date: 2021-01-19 | DOI: 10.2139/SSRN.3769159
Abstract: One way in which the law is often related to new technological developments is as an external restriction: lawyers are frequently asked whether a new technology is compatible with the law. This implies an asymmetry between technology and the law. Technology appears dynamic, the law stable. We know, however, that this image of the relationship between technology and the law is skewed. The right to data protection is itself an innovative reaction of the law to the early days of mass computing and automated data processing. The paper explores how an essential aspect of AI technologies, their lack of transparency, might support a different understanding of the right to data protection. From this different perspective, the right to data protection is regarded not as a fundamental right of its own but as a doctrinal enhancement of each fundamental right against the abstract dangers of digital data collection and processing. This understanding of the right to data protection shifts the perspective from the individual data-processing operation to the data-processing system and the abstract dangers connected with it. Systems would be measured not by how they can avoid or justify the processing of some personal data but by the effectiveness of the mechanisms employed to avert the abstract dangers associated with a specific system. This shift in perspective should also allow an assessment of AI systems despite their lack of transparency.
Citations: 0
Data Governance and Trust: Lessons from South Korean Experiences Coping with COVID-19
Sangchul Park, Yong Lim, Haksoo Ko
DOI: 10.1017/9781009207898.024
Citations: 0
China's Normative Systems for Responsible AI: From Soft Law to Hard Law
Weixing Shen, Yun Liu
DOI: 10.1017/9781009207898.012
Abstract: Progress in artificial intelligence (AI) technology has brought novel experiences to many fields and has profoundly changed industrial production, social governance, public services, business marketing, and the consumer experience. A number of AI products and services have been successfully deployed in industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendation, facial recognition, robo-advisors, and intelligent robots. At the same time, risks concerning the fairness, transparency, and stability of AI have raised widespread concerns among regulators and the public. We may have to endure security risks while enjoying the benefits of AI development, or else bridge the gap between innovation and security to ensure the sustainable development of AI. The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world's major AI innovation centers. It lists construction goals along four dimensions: AI theory and technology systems, industry competitiveness, scientific innovation and talent cultivation, and governance norms and policy framework. Specifically, by 2020, initial steps to build AI ethical norms, policies, and legislation in related fields were to be completed; by 2025, initial AI laws and regulations, ethical norms, and a policy framework were to be established, and AI security assessment and governance capabilities developed; and by 2030, more complete AI laws and regulations, ethical norms, and policy systems are to be in place. Under the guidance of the plan, the relevant departments of the Chinese authorities are actively building a normative governance system that places equal emphasis on soft and hard law. This chapter focuses on China's efforts in the area of responsible AI, mainly from the perspective of the evolution of the normative system, and introduces some recent legislative actions. The chapter proceeds in two parts. In the first part, we present the development from soft law to hard law through a comprehensive view of the normative system of responsible AI in China. In the second part, we set out a legal framework for responsible AI along four dimensions, namely data, algorithms, platforms, and application scenarios, based on statutory requirements for responsible AI in China in terms of existing and developing …
Citations: 0
Fostering the Common Good: An Adaptive Approach Regulating High-Risk AI-Driven Products and Services
Thorsten Schmidt, S. Voeneky
DOI: 10.1017/9781009207898.011
Citations: 1
Artificial Intelligence: Key Technologies and Opportunities
Wolfram Burgard
DOI: 10.1017/9781009207898.003
Citations: 0
Forward to the Past: A Critical Evaluation of the European Approach to Artificial Intelligence in Private International Law
J. Hein
DOI: 10.1017/9781009207898.017
Citations: 0
Discriminatory AI and the Law: Legal Standards for Algorithmic Profiling
A. Ungern-Sternberg
DOI: 10.1017/9781009207898.020
Citations: 0
Risk Imposition by Artificial Agents: The Moral Proxy Problem
J. Thoma
DOI: 10.1017/9781009207898.006
Abstract: It seems undeniable that the coming years will see an ever-increasing reliance on artificial agents that are, on the one hand, autonomous in the sense that they process information and make decisions without continuous human input, and, on the other hand, fall short of the kind of agency that would warrant ascribing moral responsibility to the artificial agent itself. What I have in mind here are artificial agents such as self-driving cars, artificial trading agents in financial markets, nursebots, or robot teachers. As these examples illustrate, many such agents make …
Citations: 5
Artificial Intelligence, Law, and National Security
Ebrahim Afsah
DOI: 10.1017/9781009207898.035
Citations: 0
Autonomization and Antitrust: On the Construal of the Cartel Prohibition in the Light of Algorithmic Collusion
Stefan Thomas
DOI: 10.1017/9781009207898.027
Citations: 0