Companies Committed to Responsible AI: From Principles towards Implementation and Regulation?

Philosophy & Technology Pub Date : 2021-01-01 Epub Date: 2021-10-06 DOI:10.1007/s13347-021-00474-3
Paul B de Laat
Philosophy & Technology, 34(4), 1135-1193. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8492454/pdf/
Citations: 23

Abstract



The term 'responsible AI' has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the 'Partnership on AI'. By means of a comprehensive web search, two questions are addressed by this study: (1) Did the signatory companies actually try to implement these principles in practice, and if so, how? (2) What are their views on the role of other societal actors in steering AI towards the stated principles (the issue of regulation)? It is concluded that some three of the largest amongst them have carried out valuable steps towards implementation, in particular by developing and open sourcing new software tools. To them, charges of mere 'ethics washing' do not apply. Moreover, some 10 companies from both the USA and Europe have publicly endorsed the position that apart from self-regulation, AI is in urgent need of governmental regulation. They mostly advocate focussing regulation on high-risk applications of AI, a policy which to them represents the sensible middle course between laissez-faire on the one hand and outright bans on technologies on the other. The future shaping of standards, ethical codes, and laws as a result of these regulatory efforts remains, of course, to be determined.
