Discerning Between the “Easy” and “Hard” Problems of AI Governance

Matti Minkkinen; Matti Mäntymäki
{"title":"辨别人工智能治理的“简单”和“困难”问题","authors":"Matti Minkkinen;Matti Mäntymäki","doi":"10.1109/TTS.2023.3267382","DOIUrl":null,"url":null,"abstract":"While there is widespread consensus that artificial intelligence (AI) needs to be governed owing to its rapid diffusion and societal implications, the current scholarly discussion on AI governance is dispersed across numerous disciplines and problem domains. This paper clarifies the situation by discerning two problem areas, metaphorically titled the “easy” and “hard” problems of AI governance, using a dialectic theory synthesis approach. The “easy problem” of AI governance concerns how organizations’ design, development, and use of AI systems align with laws, values, and norms stemming from legislation, ethics guidelines, and the surrounding society. Organizations can provisionally solve the “easy problem” by implementing appropriate organizational mechanisms to govern data, algorithms, and algorithmic systems. The “hard problem” of AI governance concerns AI as a general-purpose technology that transforms organizations and societies. Rather than a matter to be resolved, the “hard problem” is a sensemaking process regarding socio-technical change. Partial solutions to the “hard problem” may open unforeseen issues. While societies should not lose track of the “hard problem” of AI governance, there is significant value in solving the “easy problem” for two reasons. First, the “easy problem” can be provisionally solved by tackling bias, harm, and transparency issues. Second, solving the “easy problem” helps solve the “hard problem,” as responsible organizational AI practices create virtuous rather than vicious cycles.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 2","pages":"188-194"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8566059/10153436/10103193.pdf","citationCount":"1","resultStr":"{\"title\":\"Discerning Between the “Easy” and “Hard” Problems of AI Governance\",\"authors\":\"Matti Minkkinen;Matti Mäntymäki\",\"doi\":\"10.1109/TTS.2023.3267382\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"While there is widespread consensus that artificial intelligence (AI) needs to be governed owing to its rapid diffusion and societal implications, the current scholarly discussion on AI governance is dispersed across numerous disciplines and problem domains. This paper clarifies the situation by discerning two problem areas, metaphorically titled the “easy” and “hard” problems of AI governance, using a dialectic theory synthesis approach. The “easy problem” of AI governance concerns how organizations’ design, development, and use of AI systems align with laws, values, and norms stemming from legislation, ethics guidelines, and the surrounding society. Organizations can provisionally solve the “easy problem” by implementing appropriate organizational mechanisms to govern data, algorithms, and algorithmic systems. The “hard problem” of AI governance concerns AI as a general-purpose technology that transforms organizations and societies. Rather than a matter to be resolved, the “hard problem” is a sensemaking process regarding socio-technical change. Partial solutions to the “hard problem” may open unforeseen issues. While societies should not lose track of the “hard problem” of AI governance, there is significant value in solving the “easy problem” for two reasons. 
First, the “easy problem” can be provisionally solved by tackling bias, harm, and transparency issues. Second, solving the “easy problem” helps solve the “hard problem,” as responsible organizational AI practices create virtuous rather than vicious cycles.\",\"PeriodicalId\":73324,\"journal\":{\"name\":\"IEEE transactions on technology and society\",\"volume\":\"4 2\",\"pages\":\"188-194\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/iel7/8566059/10153436/10103193.pdf\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on technology and society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10103193/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10103193/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

While there is widespread consensus that artificial intelligence (AI) needs to be governed owing to its rapid diffusion and societal implications, the current scholarly discussion on AI governance is dispersed across numerous disciplines and problem domains. This paper clarifies the situation by discerning two problem areas, metaphorically titled the “easy” and “hard” problems of AI governance, using a dialectic theory synthesis approach. The “easy problem” of AI governance concerns how organizations’ design, development, and use of AI systems align with laws, values, and norms stemming from legislation, ethics guidelines, and the surrounding society. Organizations can provisionally solve the “easy problem” by implementing appropriate organizational mechanisms to govern data, algorithms, and algorithmic systems. The “hard problem” of AI governance concerns AI as a general-purpose technology that transforms organizations and societies. Rather than a matter to be resolved, the “hard problem” is a sensemaking process regarding socio-technical change. Partial solutions to the “hard problem” may open unforeseen issues. While societies should not lose track of the “hard problem” of AI governance, there is significant value in solving the “easy problem” for two reasons. First, the “easy problem” can be provisionally solved by tackling bias, harm, and transparency issues. Second, solving the “easy problem” helps solve the “hard problem,” as responsible organizational AI practices create virtuous rather than vicious cycles.