Failures in the Loop: Human Leadership in AI-Based Decision-Making

Katina Michael; Jordan Richard Schoenherr; Kathleen M. Vogel
{"title":"循环中的失败:人工智能决策中的人类领导力","authors":"Katina Michael;Jordan Richard Schoenherr;Kathleen M. Vogel","doi":"10.1109/TTS.2024.3378587","DOIUrl":null,"url":null,"abstract":"The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” \n<xref>[1]</xref>\n. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” \n<xref>[2]</xref>\n. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an \n<italic>all-or-nothing</i>\n approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features \n<xref>[3]</xref>\n. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ \n<xref>[4]</xref>\n, \n<xref>[5]</xref>\n. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes \n<xref>[6]</xref>\n. However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"2-13"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10539317","citationCount":"0","resultStr":"{\"title\":\"Failures in the Loop: Human Leadership in AI-Based Decision-Making\",\"authors\":\"Katina Michael;Jordan Richard Schoenherr;Kathleen M. Vogel\",\"doi\":\"10.1109/TTS.2024.3378587\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” \\n<xref>[1]</xref>\\n. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” \\n<xref>[2]</xref>\\n. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an \\n<italic>all-or-nothing</i>\\n approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features \\n<xref>[3]</xref>\\n. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ \\n<xref>[4]</xref>\\n, \\n<xref>[5]</xref>\\n. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes \\n<xref>[6]</xref>\\n. 
However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.\",\"PeriodicalId\":73324,\"journal\":{\"name\":\"IEEE transactions on technology and society\",\"volume\":\"5 1\",\"pages\":\"2-13\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10539317\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on technology and society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10539317/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10539317/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” [1]. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” [2]. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an all-or-nothing approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features [3]. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ [4], [5]. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes [6]. However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.