Research in AI has Implications for Society: How do we Respond?

H. Sætra, E. Fosch-Villaronga
Journal: Morals & Machines · DOI: 10.5771/2747-5182-2021-1-62 · Published: 2021-05-31 (Journal Article)
Citations: 10

Abstract

Artificial intelligence (AI) offers previously unimaginable possibilities, solving problems faster and more creatively than before, representing and inviting hope and change, but also fear and resistance. Unfortunately, while the pace of technology development and application dramatically accelerates, the understanding of its implications does not follow suit. Moreover, while mechanisms to anticipate, control, and steer AI development to prevent adverse consequences seem necessary, the current power dynamics within which society should frame such development are causing much confusion. In this article we ask whether AI advances should be restricted, modified, or adjusted based on their potential legal, ethical, and societal consequences. We examine four possible arguments in favor of subjecting scientific activity to stricter ethical and political control and critically analyze them in light of the perspective that science, ethics, and politics should strive for a division of labor and balance of power rather than a conflation. We argue that the domains of science, ethics, and politics should not be conflated if we are to retain the ability to adequately assess the appropriate course of action in light of AI's implications. We do so because such conflation could lead to uncertain and questionable outcomes, such as politicized science or ethics washing, ethics constrained by corporate or scientific interests, insufficient regulation, and political inactivity due to a misplaced belief in industry self-regulation. As such, we argue that the different functions of science, ethics, and politics must be respected to ensure AI development serves the interests of society.