Combatting AI’s Protectionism & Totalitarian-Coded Hypnosis: The Case for AI Reparations & Antitrust Remedies in the Ecology of Collective Self-Determination

Maurice R. Dyson
{"title":"Combatting AI’s Protectionism & Totalitarian-Coded Hypnosis: The Case for AI Reparations & Antitrust Remedies in the Ecology of Collective Self-Determination","authors":"Maurice R. Dyson","doi":"10.25172/smulr.75.3.7","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence’s (AI) global race for comparative advantage has the world spinning, while leaving people of color and the poor rushing to reinvent AI imagination in less racist, destructive ways. In repurposing AI technology, we can look to close the national racial gaps in academic achievement, healthcare, housing, income, and fairness in the criminal justice system to conceive what AI reparations can fairly look like. AI can create a fantasy world, realizing goods we previously thought impossible. However, if AI does not close these national gaps, it no longer has foreseeable or practical social utility value compared to its foreseeable and actual grave social harm. The hypothetical promises of AI’s beneficial use as an equality machine without the requisite action and commitment to address the inequality it already causes now is fantastic propaganda masquerading as merit for a Silicon Valley that has yet to diversify its own ranks or undo the harm it is already causing. Care must be taken that fanciful imagining yields to practical realities that, in many cases, AI no longer has foreseeable practical social utility when compared to the harm it poses to democracy, privacy, equality, personhood and global warming. Until we can accept as a nation that the Sherman Antitrust Act of 1890 and the Clayton Antitrust Act of 1914 are not up to the task for breaking up tech companies; until we can acknowledge DOJ and FTC regulators are constrained from using their power because of a framework of permissibility implicit in the “consumer welfare standard” of antitrust law; until a conservative judiciary inclined to defer to that paradigm ceases its enabling of big tech, then workers, students, and all natural persons will continue to be harmed by big tech’s anticompetitive and inhumane activity. Accordingly, AI should be vigorously subject to anti-trust monopolistic protections and corporate, contractual, and tort liability explored herein, such as strict liability or a new AI prima facie tort that can pierce the corporate and technological veil of algorithmic proprietary secrecy in the interest of justice. And when appropriate, AI implementation should be phased out for a later time when we have better command and control of how to eliminate its harmful impacts that will only exacerbate existing inequities. Fourth Amendment jurisprudence of a totalitarian tenor—greatly helped by Terry v. Ohio—has opened the door to expansive police power through AI’s air superiority and proliferation of surveillance in communities of color. This development is further exacerbated by AI companies’ protectionist actions. AI rests in a protectionist ecology including, inter alia, the notion of black boxes, deep neural network learning, Section 230 of the Communications Decency Act, and partnerships with law enforcement that provide cover under the auspices of police immunity. These developments should discourage a “safe harbor” protecting tech companies from liability unless and until there is a concomitant safe harbor for Blacks and people of color to be free of the impact of harmful algorithmic spell casting. 
As a society, we should endeavor to protect the sovereign soul’s choice to decide which actions it will implicitly endorse with its own biometric property. Because we do not morally consent to give the right to use our biometrics to accuse, harass, or harm another in a line up, arrest, or worse, these concerns should be seen as the lawful exercise of our right to remain a conscientious objector under the First Amendment. Our biometrics should not bear false witness against our neighbors in violation of our First Amendment right to the free exercise of religious belief, sincerely held convictions, and conscientious objections thereto. Accordingly, this Article suggests a number of policy recommendations for legislative interventions that have informed the work of the author as a Commissioner on the Massachusetts Commission on Facial Recognition Technology, which has now become the framework for the recently proposed federal legislation—The Facial Recognition Technology Act of 2022. It further explores what AI reparations might fairly look like, and the collective social movements of resistance that are needed to bring about its fruition. It imagines a collective ecology of self-determination to counteract the expansive scope of AI’s protectionism, surveillance, and discrimination. This movement of self-determination seeks: (1) Black, Brown, and race-justice-conscious progressives to have majority participatory governance over all harmful tech applied disproportionately to those of us already facing both social death and contingent violence in our society by resorting to means of legislation, judicial activism, entrepreneurial influential pressure, algorithmic enforced injunctions, and community organization; (2) a prevailing reparations mindset infused in coding, staffing, governance, and antitrust accountability within all industry sectors of AI product development and services; (3) the establishment of our own counter AI tech, as well as tech, law, and social enrichment educational academies, technological knowledge exchange programs, victim compensation funds, and the establishment of our own ISPs, CDNs, cloud services, domain registrars, and social media platforms provided on our own terms to facilitate positive social change in our communities; and (4) personal daily divestment from AI companies’ ubiquitous technologies, to the extent practicable to avoid their hypnotic and addictive effects and to deny further profits to dehumanizing AI tech practices. AI requires a more just imagination. In this way, we can continue to define ourselves for ourselves and submit to an inside-out, heart-centered mindfulness perspective that informs our coding work and advocacy. Recognizing we are engaged in a battle of the mind and soul of AI, the nation, and ourselves is all the more imperative since we know that algorithms are not just programmed—they program us and the world in which we live. The need for public education, the cornerstone institution for creating an informed civil society, is now greater than ever, but it too is insidiously infected by algorithms as the digital codification of the old Jim Crow laws, promoting the same racial profiling, segregative tracking, and stigma labeling many public school students like myself had to overcome. For those of us who stand successful in defiance of these predictive algorithms, we stand simultaneously as the living embodiment of the promise inherent in all of us and the endemic fallacies of erroneous predictive code. 
A need thus arises for a counter-disruptive narrative in which our victory as survivors over coded inequity disrupts the false psychological narrative of technological objectivity and promise for equality.","PeriodicalId":80169,"journal":{"name":"SMU law review : a publication of Southern Methodist University School of Law","volume":"220 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SMU law review : a publication of Southern Methodist University School of Law","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25172/smulr.75.3.7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Artificial Intelligence’s (AI) global race for comparative advantage has the world spinning, leaving people of color and the poor rushing to reimagine AI in less racist, less destructive ways. In repurposing AI technology, we can look to close the national racial gaps in academic achievement, healthcare, housing, income, and fairness in the criminal justice system to conceive what AI reparations can fairly look like. AI can create a fantasy world, realizing goods we previously thought impossible. However, if AI does not close these national gaps, it no longer has foreseeable or practical social utility when weighed against its foreseeable and actual grave social harm. The hypothetical promise of AI’s beneficial use as an equality machine, absent the requisite action and commitment to address the inequality it already causes, is fantastic propaganda masquerading as merit for a Silicon Valley that has yet to diversify its own ranks or undo the harm it is already causing. Care must be taken that fanciful imagining yields to the practical reality that, in many cases, AI no longer has foreseeable practical social utility when compared to the harm it poses to democracy, privacy, equality, and personhood, and to its contribution to global warming.

Until we can accept as a nation that the Sherman Antitrust Act of 1890 and the Clayton Antitrust Act of 1914 are not up to the task of breaking up tech companies; until we can acknowledge that DOJ and FTC regulators are constrained from using their power by the framework of permissibility implicit in the “consumer welfare standard” of antitrust law; and until a conservative judiciary inclined to defer to that paradigm ceases enabling big tech, workers, students, and all natural persons will continue to be harmed by big tech’s anticompetitive and inhumane activity. Accordingly, AI should be vigorously subjected to antitrust anti-monopoly enforcement and to the corporate, contractual, and tort liability explored herein, such as strict liability or a new AI prima facie tort that can pierce the corporate and technological veil of algorithmic proprietary secrecy in the interest of justice. And, when appropriate, AI implementation should be phased out until a later time when we have better command and control of how to eliminate harmful impacts that will otherwise only exacerbate existing inequities.

Fourth Amendment jurisprudence of a totalitarian tenor, greatly aided by Terry v. Ohio, has opened the door to expansive police power through AI’s air superiority and the proliferation of surveillance in communities of color. This development is further exacerbated by AI companies’ protectionist actions. AI rests in a protectionist ecology that includes, inter alia, the notion of black boxes, deep neural network learning, Section 230 of the Communications Decency Act, and partnerships with law enforcement that provide cover under the auspices of police immunity. These developments should discourage any “safe harbor” protecting tech companies from liability unless and until there is a concomitant safe harbor for Blacks and people of color to be free of the impact of harmful algorithmic spell casting.

As a society, we should endeavor to protect the sovereign soul’s choice to decide which actions it will implicitly endorse with its own biometric property. Because we do not morally consent to giving others the right to use our biometrics to accuse, harass, or harm another in a lineup, an arrest, or worse, these concerns should be seen as the lawful exercise of our right to remain conscientious objectors under the First Amendment. Our biometrics should not bear false witness against our neighbors in violation of our First Amendment right to the free exercise of religious belief, sincerely held convictions, and conscientious objections thereto. Accordingly, this Article offers a number of policy recommendations for legislative intervention that have informed the author’s work as a Commissioner on the Massachusetts Commission on Facial Recognition Technology, recommendations that have now become the framework for recently proposed federal legislation, the Facial Recognition Technology Act of 2022.

The Article further explores what AI reparations might fairly look like and the collective social movements of resistance needed to bring them to fruition. It imagines a collective ecology of self-determination to counteract the expansive scope of AI’s protectionism, surveillance, and discrimination. This movement of self-determination seeks:

(1) Black, Brown, and race-justice-conscious progressives having majority participatory governance over all harmful tech applied disproportionately to those of us already facing both social death and contingent violence in our society, pursued through legislation, judicial activism, entrepreneurial pressure, algorithmically enforced injunctions, and community organizing;

(2) a prevailing reparations mindset infused into coding, staffing, governance, and antitrust accountability within all industry sectors of AI product development and services;

(3) the establishment of our own counter-AI tech; of tech, law, and social-enrichment educational academies, technological knowledge-exchange programs, and victim compensation funds; and of our own ISPs, CDNs, cloud services, domain registrars, and social media platforms, provided on our own terms to facilitate positive social change in our communities; and

(4) personal daily divestment from AI companies’ ubiquitous technologies, to the extent practicable, to avoid their hypnotic and addictive effects and to deny further profits to dehumanizing AI tech practices.

AI requires a more just imagination. In this way, we can continue to define ourselves for ourselves and submit to an inside-out, heart-centered mindfulness perspective that informs our coding work and advocacy. Recognizing that we are engaged in a battle for the mind and soul of AI, the nation, and ourselves is all the more imperative since we know that algorithms are not just programmed; they program us and the world in which we live. The need for public education, the cornerstone institution for creating an informed civil society, is now greater than ever, but it too is insidiously infected by algorithms as the digital codification of the old Jim Crow laws, promoting the same racial profiling, segregative tracking, and stigmatizing labels that many public school students like myself had to overcome. For those of us who have succeeded in defiance of these predictive algorithms, we stand simultaneously as the living embodiment of the promise inherent in all of us and of the endemic fallacies of erroneous predictive code.

A need thus arises for a counter-disruptive narrative in which our victory as survivors over coded inequity disrupts the false psychological narrative of technological objectivity and the promise of equality.