Artificial Intelligence, LLC: Corporate Personhood as Tort Reform

Alicia Lai
{"title":"Artificial Intelligence, LLC: Corporate Personhood as Tort Reform","authors":"Alicia Lai","doi":"10.2139/ssrn.3677360","DOIUrl":null,"url":null,"abstract":"Our legal system has long tried to fit the square peg of artificial intelligence (AI) technologies into the round hole of the current tort regime, overlooking the inability of traditional liability schemes to address the nuances of how AI technology creates harms. The current tort regime deals out rough justice—using strict liability for some AI products and using the negligence rule for other AI services—both of which are insufficiently tailored to achieve public policy objectives. \n \nUnder a strict liability regime where manufacturers are always held liable for the faults of their technology regardless of knowledge or precautionary measures, firms are incentivized to play it safe and stifle innovation. But even with this cautionary stance, the goals of strict liability cannot be met due to the unique nature of AI technology: its mistakes are merely “efficient errors”—they appropriately surpass the human baseline, they are game theory problems intended for a jury, they are necessary to train a robust system, or they are harmless but misclassified. \n \nUnder a negligence liability regime where the onus falls entirely on consumers to prove the element of causation, victimized consumers are left without sufficient recourse or compensation. Many critiques have been leveled against the “black-box” nature of algorithms. \n \nThis paper proposes a new framework to regulate artificial intelligence technologies: bestowing corporate personhood to AI systems. First, the corporate personality trait of “limited liability” strikes an optimal balance in determining liability—it would both compensate victims (for instance, through obligations to carry insurance and a straightforward burden of causation) while holding manufacturers responsible only when the infraction is egregious (for instance, through veil-piercing). Second, corporate personhood is “divisible”—meaning not all corporate personality traits need to be granted—which circumvents many of the philosophical criticisms of giving AI the complete set of rights of full legal personhood.","PeriodicalId":431428,"journal":{"name":"Corporate Law: LLCs","volume":"120 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Corporate Law: LLCs","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3677360","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Our legal system has long tried to fit the square peg of artificial intelligence (AI) technologies into the round hole of the current tort regime, overlooking the inability of traditional liability schemes to address the nuances of how AI technology creates harms. The current tort regime deals out rough justice—using strict liability for some AI products and the negligence rule for other AI services—both of which are insufficiently tailored to achieve public policy objectives.

Under a strict liability regime, where manufacturers are always held liable for the faults of their technology regardless of knowledge or precautionary measures, firms are incentivized to play it safe, stifling innovation. But even with this cautionary stance, the goals of strict liability cannot be met due to the unique nature of AI technology: its mistakes are merely “efficient errors”—they appropriately surpass the human baseline, they are game theory problems intended for a jury, they are necessary to train a robust system, or they are harmless but misclassified.

Under a negligence liability regime, where the onus falls entirely on consumers to prove the element of causation, victimized consumers are left without sufficient recourse or compensation. Many critiques have been leveled against the “black-box” nature of algorithms, which makes that element of causation especially difficult to establish.

This paper proposes a new framework to regulate artificial intelligence technologies: bestowing corporate personhood on AI systems. First, the corporate personality trait of “limited liability” strikes an optimal balance in determining liability—it would compensate victims (for instance, through obligations to carry insurance and a straightforward burden of causation) while holding manufacturers responsible only when the infraction is egregious (for instance, through veil-piercing). Second, corporate personhood is “divisible”—meaning not all corporate personality traits need to be granted—which circumvents many of the philosophical criticisms of giving AI the complete set of rights of full legal personhood.