Failures in the Loop: Human Leadership in AI-Based Decision-Making
Katina Michael; Jordan Richard Schoenherr; Kathleen M. Vogel
IEEE Transactions on Technology and Society, vol. 5, no. 1, pp. 2–13, March 2024. DOI: 10.1109/TTS.2024.3378587
The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” [1]. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” [2]. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an all-or-nothing approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features [3]. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ [4], [5]. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes [6]. However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.