Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity

S. Chesterman
{"title":"透过玻璃,黑暗:人工智能和不透明度问题","authors":"S. Chesterman","doi":"10.2139/ssrn.3575534","DOIUrl":null,"url":null,"abstract":"\n As computer programs become more complex, the ability of non-specialists to understand how a given output has been reached diminishes. Opaqueness may also be built into programs to protect proprietary interests. Both types of systems are capable of being explained, either through recourse to experts or an order to produce information. Another class of system may be naturally opaque, however, using deep learning methods that are impossible to explain in a manner that humans can comprehend. An emerging literature describes these phenomena or specific problems to which they give rise, notably the potential for bias against specific groups. Drawing on examples from the United States, the European Union, and China, this Article develops a novel typology of three discrete regulatory challenges posed by opacity. First, it may encourage—or fail to discourage—inferior decisions by removing the potential for oversight and accountability. Second, it may allow impermissible decisions, notably those that explicitly or implicitly rely on protected categories such as gender or race in making a determination. Third, it may render illegitimate decisions in which the process by which an answer is reached is as important as the answer itself. The means of addressing some or all of these concerns is routinely said to be through transparency. Yet, while proprietary opacity can be dealt with by court order and complex opacity through recourse to experts, naturally opaque systems may require novel forms of “explanation” or an acceptance that some machine-made decisions cannot be explained—or, in the alternative, that some decisions should not be made by machine at all.","PeriodicalId":13594,"journal":{"name":"Information Systems & Economics eJournal","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity\",\"authors\":\"S. Chesterman\",\"doi\":\"10.2139/ssrn.3575534\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n As computer programs become more complex, the ability of non-specialists to understand how a given output has been reached diminishes. Opaqueness may also be built into programs to protect proprietary interests. Both types of systems are capable of being explained, either through recourse to experts or an order to produce information. Another class of system may be naturally opaque, however, using deep learning methods that are impossible to explain in a manner that humans can comprehend. An emerging literature describes these phenomena or specific problems to which they give rise, notably the potential for bias against specific groups. Drawing on examples from the United States, the European Union, and China, this Article develops a novel typology of three discrete regulatory challenges posed by opacity. First, it may encourage—or fail to discourage—inferior decisions by removing the potential for oversight and accountability. Second, it may allow impermissible decisions, notably those that explicitly or implicitly rely on protected categories such as gender or race in making a determination. Third, it may render illegitimate decisions in which the process by which an answer is reached is as important as the answer itself. 
The means of addressing some or all of these concerns is routinely said to be through transparency. Yet, while proprietary opacity can be dealt with by court order and complex opacity through recourse to experts, naturally opaque systems may require novel forms of “explanation” or an acceptance that some machine-made decisions cannot be explained—or, in the alternative, that some decisions should not be made by machine at all.\",\"PeriodicalId\":13594,\"journal\":{\"name\":\"Information Systems & Economics eJournal\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Systems & Economics eJournal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3575534\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Systems & Economics eJournal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3575534","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

As computer programs become more complex, the ability of non-specialists to understand how a given output has been reached diminishes. Opaqueness may also be built into programs to protect proprietary interests. Both types of systems are capable of being explained, either through recourse to experts or an order to produce information. Another class of system may be naturally opaque, however, using deep learning methods that are impossible to explain in a manner that humans can comprehend. An emerging literature describes these phenomena or specific problems to which they give rise, notably the potential for bias against specific groups. Drawing on examples from the United States, the European Union, and China, this Article develops a novel typology of three discrete regulatory challenges posed by opacity. First, it may encourage—or fail to discourage—inferior decisions by removing the potential for oversight and accountability. Second, it may allow impermissible decisions, notably those that explicitly or implicitly rely on protected categories such as gender or race in making a determination. Third, it may render illegitimate decisions in which the process by which an answer is reached is as important as the answer itself. The means of addressing some or all of these concerns is routinely said to be through transparency. Yet, while proprietary opacity can be dealt with by court order and complex opacity through recourse to experts, naturally opaque systems may require novel forms of “explanation” or an acceptance that some machine-made decisions cannot be explained—or, in the alternative, that some decisions should not be made by machine at all.
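The second regulatory challenge noted above, decisions that implicitly rely on protected categories, can be made concrete with a short illustration. The sketch below is not drawn from the article: it uses entirely synthetic data, scikit-learn, and hypothetical feature names (such as postcode_group) to show how a model that never sees a protected attribute can still reproduce group-level disparities through a correlated proxy.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (never shown to the model).
protected = rng.integers(0, 2, size=n)

# A proxy feature that agrees with the protected attribute ~90% of the time,
# e.g. a coarse postcode grouping.
postcode_group = protected ^ (rng.random(n) < 0.1).astype(int)

# A historically skewed outcome: one group has systematically lower income.
income = rng.normal(50 + 10 * (1 - protected), 5)
label = (income > 55).astype(int)

# Train only on "neutral" features; the protected column is excluded.
X = np.column_stack([postcode_group, rng.normal(size=n)])
model = LogisticRegression().fit(X, label)

# Predicted approval rates still differ sharply by protected group,
# because the proxy carries the same information.
pred = model.predict(X)
for g in (0, 1):
    rate = pred[protected == g].mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")

Run as written, the two groups receive very different predicted approval rates even though the protected column was withheld from training, which is the kind of implicit reliance the article argues opacity can conceal from oversight.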