{"title":"确定人工智能法规的范围","authors":"Jonas Schuett","doi":"10.2139/ssrn.3453632","DOIUrl":null,"url":null,"abstract":"ABSTRACT\n The paper argues that the material scope of AI regulations should not rely on the term ‘artificial intelligence (AI)’. The argument is developed by proposing a number of requirements for legal definitions, surveying existing AI definitions, and then discussing the extent to which they meet the proposed requirements. It is shown that existing definitions of AI do not meet the most important requirements for legal definitions. Next, the paper argues that a risk-based approach would be preferable. Rather than using the term AI, policy makers should focus on the specific risks they want to reduce. It is shown that the requirements for legal definitions can be better met by defining the main sources of relevant risks: certain technical approaches (e.g. reinforcement learning), applications (e.g. facial recognition), and capabilities (e.g. the ability to physically interact with the environment). Finally, the paper discusses the extent to which this approach can also be applied to more advanced AI systems.","PeriodicalId":37639,"journal":{"name":"Law, Innovation and Technology","volume":"1 1","pages":"60 - 82"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"Defining the scope of AI regulations\",\"authors\":\"Jonas Schuett\",\"doi\":\"10.2139/ssrn.3453632\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT\\n The paper argues that the material scope of AI regulations should not rely on the term ‘artificial intelligence (AI)’. The argument is developed by proposing a number of requirements for legal definitions, surveying existing AI definitions, and then discussing the extent to which they meet the proposed requirements. It is shown that existing definitions of AI do not meet the most important requirements for legal definitions. Next, the paper argues that a risk-based approach would be preferable. Rather than using the term AI, policy makers should focus on the specific risks they want to reduce. It is shown that the requirements for legal definitions can be better met by defining the main sources of relevant risks: certain technical approaches (e.g. reinforcement learning), applications (e.g. facial recognition), and capabilities (e.g. the ability to physically interact with the environment). 
Finally, the paper discusses the extent to which this approach can also be applied to more advanced AI systems.\",\"PeriodicalId\":37639,\"journal\":{\"name\":\"Law, Innovation and Technology\",\"volume\":\"1 1\",\"pages\":\"60 - 82\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Law, Innovation and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3453632\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Law, Innovation and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3453632","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
ABSTRACT
The paper argues that the material scope of AI regulations should not rely on the term ‘artificial intelligence (AI)’. The argument is developed by proposing a number of requirements for legal definitions, surveying existing AI definitions, and then discussing the extent to which they meet the proposed requirements. It is shown that existing definitions of AI do not meet the most important requirements for legal definitions. Next, the paper argues that a risk-based approach would be preferable. Rather than using the term AI, policy makers should focus on the specific risks they want to reduce. It is shown that the requirements for legal definitions can be better met by defining the main sources of relevant risks: certain technical approaches (e.g. reinforcement learning), applications (e.g. facial recognition), and capabilities (e.g. the ability to physically interact with the environment). Finally, the paper discusses the extent to which this approach can also be applied to more advanced AI systems.
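To make the contrast with an 'AI'-based scope concrete, here is a minimal sketch of how a risk-based material scope could be expressed as an enumerated rule set keyed to technical approaches, applications, and capabilities. This is an illustration of the general idea only, not code or a scheme from the paper; the category names and example entries (taken from the abstract's examples) are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a system falls within scope if it uses a listed
# technical approach, serves a listed application, or has a listed capability.
# The entries mirror the abstract's examples and are not exhaustive or authoritative.
REGULATED_APPROACHES = {"reinforcement_learning"}
REGULATED_APPLICATIONS = {"facial_recognition"}
REGULATED_CAPABILITIES = {"physical_interaction_with_environment"}


@dataclass
class SystemProfile:
    """Minimal description of a system for scoping purposes (hypothetical)."""
    approaches: set[str] = field(default_factory=set)
    applications: set[str] = field(default_factory=set)
    capabilities: set[str] = field(default_factory=set)


def in_material_scope(system: SystemProfile) -> bool:
    """Return True if the system triggers any enumerated source of risk.

    Note that nothing here asks whether the system 'is AI'; the scope is
    defined entirely by the listed risk sources.
    """
    return bool(
        system.approaches & REGULATED_APPROACHES
        or system.applications & REGULATED_APPLICATIONS
        or system.capabilities & REGULATED_CAPABILITIES
    )


if __name__ == "__main__":
    surveillance_tool = SystemProfile(applications={"facial_recognition"})
    spreadsheet = SystemProfile()
    print(in_material_scope(surveillance_tool))  # True
    print(in_material_scope(spreadsheet))        # False
```

The point of the sketch is that the scoping question becomes a membership test against named risk sources, which can be amended individually, rather than a threshold judgment about whether a system counts as 'artificial intelligence'.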
JOURNAL DESCRIPTION
Stem cell research, cloning, GMOs ... How do regulations affect such emerging technologies? What impact do new technologies have on law? And can we rely on technology itself as a regulatory tool? The meeting of law and technology is rapidly becoming an increasingly significant (and controversial) topic. Law, Innovation and Technology is, however, the only journal to engage fully with it, setting an innovative and distinctive agenda for lawyers, ethicists and policy makers. Spanning ICTs, biotechnologies, nanotechnologies, neurotechnologies, robotics and AI, it offers a unique forum for the highest level of reflection on this essential area.