An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections

M. Findlay, J. Seah
DOI: 10.1109/AI4G50087.2020.9311069
Published in: 2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G)
Publication date: 2020-05-12
Citations: 6

Abstract

While we have witnessed a rapid growth of ethics documents meant to guide artificial intelligence (AI) development, the promotion of AI ethics has nonetheless proceeded with little input from AI practitioners themselves. Given the proliferation of AI for Social Good initiatives, this is an emerging gap that needs to be addressed in order to develop more meaningful ethical approaches to AI use and development. This paper offers a methodology, a 'shared fairness' approach, aimed at identifying AI practitioners' needs when it comes to confronting and resolving ethical challenges, and at finding a third space where their operational language can be married with that of the more abstract principles that presently remain at the periphery of their work experiences. We offer a grassroots approach to operational ethics based on dialogue and mutualised responsibility: this methodology is centred on conversations intended to elicit practitioners' perceived ethical attribution and distribution over key value-laden operational decisions, to identify when these decisions arise and what ethical challenges they present, and to engage in a language of ethics and responsibility which enables practitioners to internalise ethical responsibility. The methodology bridges responsibility imbalances that rest in structural decision-making power and elite technical knowledge by commencing with personal, facilitated conversations, returning the ethical discourse to those meant to give it meaning at the sharp end of the ecosystem. Our primary contribution is to add to the recent literature seeking to bring AI practitioners' experiences to the fore by offering a methodology for understanding how ethics manifests as a relational and interdependent sociotechnical practice in their work.