You Might Be a Robot

B. Casey, Mark A. Lemley
{"title":"你可能是个机器人","authors":"B. Casey, Mark A. Lemley","doi":"10.2139/SSRN.3327602","DOIUrl":null,"url":null,"abstract":"As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI — not even experts. What’s more, technological advances make it harder and harder each day to tell people from robots and robots from “dumb” machines. We’ve already seen disastrous legal definitions written with one target in mind inadvertently affecting others. In fact, if you’re reading this you’re (probably) not a robot, but certain laws might already treat you as one. \n \nDefinitional challenges like these aren’t exclusive to robots and AI. But today, all signs indicate we’re approaching an inflection point. Whether it’s citywide bans of “robot sex brothels” or nationwide efforts to crack down on “ticket scalping bots,” we’re witnessing an explosion of interest in regulating robots, human enhancement technologies, and all things in between. And that, in turn, means that typological quandaries once confined to philosophy seminars can no longer be dismissed as academic. Want, for example, to crack down on foreign “influence campaigns” by regulating social media bots? Be careful not to define “bot” too broadly (like the California legislature recently did), or the supercomputer nestled in your pocket might just make you one. Want, instead, to promote traffic safety by regulating drivers? Be careful not to presume that only humans can drive (as our Federal Motor Vehicle Safety Standards do), or you may soon exclude the best drivers on the road. \n \nIn this Article, we suggest that the problem isn’t simply that we haven’t hit upon the right definition. Instead, there may not be a “right” definition for the multifaceted, rapidly evolving technologies we call robots or AI. As we’ll demonstrate, even the most thoughtful of definitions risk being overbroad, underinclusive, or simply irrelevant in short order. Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist, Alan Turing, did when confronted with the challenge of defining robots: embrace their ineffable nature. We offer several strategies to do so. First, whenever possible, laws should regulate behavior, not things (or as we put it, regulate verbs, not nouns). Second, where we must distinguish robots from other entities, the law should apply what we call Turing’s Razor, identifying robots on a case-by-case basis. Third, we offer six functional criteria for making these types of “I know it when I see it” determinations and argue that courts are generally better positioned than legislators to apply such standards. Finally, we argue that if we must have definitions rather than apply standards, they should be as short-term and contingent as possible. That, in turn, suggests regulators—not legislators—should play the defining role.","PeriodicalId":434487,"journal":{"name":"European Economics: Microeconomics & Industrial Organization eJournal","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"You Might Be a Robot\",\"authors\":\"B. Casey, Mark A. 
Lemley\",\"doi\":\"10.2139/SSRN.3327602\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI — not even experts. What’s more, technological advances make it harder and harder each day to tell people from robots and robots from “dumb” machines. We’ve already seen disastrous legal definitions written with one target in mind inadvertently affecting others. In fact, if you’re reading this you’re (probably) not a robot, but certain laws might already treat you as one. \\n \\nDefinitional challenges like these aren’t exclusive to robots and AI. But today, all signs indicate we’re approaching an inflection point. Whether it’s citywide bans of “robot sex brothels” or nationwide efforts to crack down on “ticket scalping bots,” we’re witnessing an explosion of interest in regulating robots, human enhancement technologies, and all things in between. And that, in turn, means that typological quandaries once confined to philosophy seminars can no longer be dismissed as academic. Want, for example, to crack down on foreign “influence campaigns” by regulating social media bots? Be careful not to define “bot” too broadly (like the California legislature recently did), or the supercomputer nestled in your pocket might just make you one. Want, instead, to promote traffic safety by regulating drivers? Be careful not to presume that only humans can drive (as our Federal Motor Vehicle Safety Standards do), or you may soon exclude the best drivers on the road. \\n \\nIn this Article, we suggest that the problem isn’t simply that we haven’t hit upon the right definition. Instead, there may not be a “right” definition for the multifaceted, rapidly evolving technologies we call robots or AI. As we’ll demonstrate, even the most thoughtful of definitions risk being overbroad, underinclusive, or simply irrelevant in short order. Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist, Alan Turing, did when confronted with the challenge of defining robots: embrace their ineffable nature. We offer several strategies to do so. First, whenever possible, laws should regulate behavior, not things (or as we put it, regulate verbs, not nouns). Second, where we must distinguish robots from other entities, the law should apply what we call Turing’s Razor, identifying robots on a case-by-case basis. Third, we offer six functional criteria for making these types of “I know it when I see it” determinations and argue that courts are generally better positioned than legislators to apply such standards. Finally, we argue that if we must have definitions rather than apply standards, they should be as short-term and contingent as possible. 
That, in turn, suggests regulators—not legislators—should play the defining role.\",\"PeriodicalId\":434487,\"journal\":{\"name\":\"European Economics: Microeconomics & Industrial Organization eJournal\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Economics: Microeconomics & Industrial Organization eJournal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/SSRN.3327602\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Economics: Microeconomics & Industrial Organization eJournal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/SSRN.3327602","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI — not even experts. What’s more, technological advances make it harder and harder each day to tell people from robots and robots from “dumb” machines. We’ve already seen disastrous legal definitions written with one target in mind inadvertently affecting others. In fact, if you’re reading this you’re (probably) not a robot, but certain laws might already treat you as one.

Definitional challenges like these aren’t exclusive to robots and AI. But today, all signs indicate we’re approaching an inflection point. Whether it’s citywide bans of “robot sex brothels” or nationwide efforts to crack down on “ticket scalping bots,” we’re witnessing an explosion of interest in regulating robots, human enhancement technologies, and all things in between. And that, in turn, means that typological quandaries once confined to philosophy seminars can no longer be dismissed as academic. Want, for example, to crack down on foreign “influence campaigns” by regulating social media bots? Be careful not to define “bot” too broadly (like the California legislature recently did), or the supercomputer nestled in your pocket might just make you one. Want, instead, to promote traffic safety by regulating drivers? Be careful not to presume that only humans can drive (as our Federal Motor Vehicle Safety Standards do), or you may soon exclude the best drivers on the road.

In this Article, we suggest that the problem isn’t simply that we haven’t hit upon the right definition. Instead, there may not be a “right” definition for the multifaceted, rapidly evolving technologies we call robots or AI. As we’ll demonstrate, even the most thoughtful of definitions risk being overbroad, underinclusive, or simply irrelevant in short order. Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist, Alan Turing, did when confronted with the challenge of defining robots: embrace their ineffable nature. We offer several strategies to do so. First, whenever possible, laws should regulate behavior, not things (or as we put it, regulate verbs, not nouns). Second, where we must distinguish robots from other entities, the law should apply what we call Turing’s Razor, identifying robots on a case-by-case basis. Third, we offer six functional criteria for making these types of “I know it when I see it” determinations and argue that courts are generally better positioned than legislators to apply such standards. Finally, we argue that if we must have definitions rather than apply standards, they should be as short-term and contingent as possible. That, in turn, suggests regulators—not legislators—should play the defining role.