After Harm: A Plea for Moral Repair after Algorithms Have Failed

IF 3.0 · SCI Region 2 (Philosophy) · Q1 (ENGINEERING, MULTIDISCIPLINARY)
Pak-Hang Wong, Gernot Rieder
DOI: 10.1007/s11948-025-00555-y
Journal: Science and Engineering Ethics, vol. 31, no. 5, p. 26
Published: 2025-09-18 (Journal Article)
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12446399/pdf/
Citations: 0

Abstract


In response to growing concerns over the societal impacts of AI and algorithmic decision-making, current scholarly and legal efforts have mainly focused on identifying risks and implementing safeguards against harmful consequences, with regulations seeking to ensure that systems are secure, trustworthy, and ethical. This preventative approach, however, rests on the assumption that algorithmic harm can essentially be avoided by specifying rules and requirements that protect against potential dangers. Consequently, comparatively little attention has been given to post-harm scenarios, i.e. cases and situations where individuals have already been harmed by an algorithmic system. We contend that this inattention to the aftermath of harm constitutes a major blind spot in AI ethics and governance and propose the notion of algorithmic imprint as a sensitizing concept for understanding both the nature and potential longer-term effects of algorithmic harm. Arguing that neither the decommissioning of harmful systems nor the reversal of damaging decisions is sufficient to fully address these effects, we suggest that a more comprehensive response to algorithmic harm requires engaging in discussions on moral repair, offering directions on what such a plea for moral repair ultimately entails.

Source journal
Science and Engineering Ethics (multidisciplinary journal — Engineering: Comprehensive)
CiteScore: 10.70
Self-citation rate: 5.40%
Articles per year: 54
Review time: >12 weeks
Aims and scope: Science and Engineering Ethics is an international multidisciplinary journal dedicated to exploring ethical issues associated with science and engineering, covering professional education, research, and practice, as well as the effects of technological innovations and research findings on society. While the focus of this journal is on science and engineering, contributions from a broad range of disciplines, including the social sciences and humanities, are welcomed. Areas of interest include, but are not limited to, the ethics of new and emerging technologies, research ethics, computer ethics, energy ethics, animal and human subjects ethics, ethics education in science and engineering, ethics in design, biomedical ethics, and values in technology and innovation. We welcome contributions that deal with these issues from an international perspective, particularly from countries that are underrepresented in these discussions.