Re-enacting machine learning practices to enquire into the moral issues they pose

Impact Factor: 2.4 · CAS Region 2 (Literature) · JCR Q1 (Communication)
Jean-Marie John-Mathews, Robin De Mourat, Donato Ricci, M. Crépel
Journal: Convergence: The International Journal of Research into New Media Technologies
DOI: 10.1177/13548565231174584
Published: 2023-07-07 (Journal Article)
Citations: 2

Abstract

As the number of ethical incidents associated with Machine Learning (ML) algorithms increases worldwide, many actors are seeking to produce technical and legal tools to regulate the professional practices associated with these technologies. However, these tools, generally grounded either on lofty principles or on technical approaches, often fail to address the complexity of the moral issues that ML-based systems trigger. They are mostly based on a ‘principled’ conception of morality in which technical practices cannot be seen as more than mere means put at the service of more valuable moral ends. We argue that it is necessary to localise ethical debates within the complex entanglement of technical, legal and organisational entities from which ML moral issues stem. To expand the repertoire of approaches through which these issues might be addressed, we designed and tested an interview protocol based on the re-enactment of data scientists’ daily ML practices. We asked them to recall and describe the crafting and choosing of algorithms. Then, our protocol added two reflexivity-fostering elements to the situation: technical tools to assess algorithms’ morality, based on incorporated ‘ethicality’ indicators; and a series of staged objections to the aforementioned technical solutions to ML moral issues, made by factitious actors inspired by the data scientists’ daily environment. We used this protocol to observe how ML data scientists uncover associations with multiple entities in order to address moral issues from within the course of their technical practices. We thus reframe ML morality as an inquiry into the uncertain options that practitioners face in the heat of technical activities.
We propose to institute moral enquiries both as a descriptive method serving to delineate alternative depictions of ML algorithms when they are affected by moral issues and as a transformative method to propagate situated critical technical practices within ML-building professional environments.
Source journal metrics: CiteScore 5.80 · Self-citation rate 7.10% · Articles published: 98