Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem

Q1 Arts and Humanities
Juri Viehoff
Journal: Philosophy and Technology
DOI: 10.1007/s13347-023-00664-1
Published: 2023-09-25 (Journal Article)
Citations: 0

Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem
Abstract Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a target concept of trust in terms of four functional desiderata (trust-reliance distinction, explanatory strength, tracking affective responses, and accounting for distrust), I analyze how agential vs. non-agential accounts can satisfy these. A final section investigates how ‘non-ideal’ circumstances—that is, circumstances where the manifest and operative concept use diverge amongst concept users—affect our choice about which rendering of trust is to be preferred. I suggest that some prominent arguments against extending the language of trust to non-agents are not decisive and reflect on an important oversight in the current debate, namely a failure to address how narrower, agent-centred accounts curtail our ability to distrust non-agents.
Source journal: Philosophy and Technology (Arts and Humanities – Philosophy)
CiteScore: 10.40
Self-citation rate: 0.00%
Articles per year: 98