Assuring AI safety: fallible knowledge and the Gricean maxims

Marten H. L. Kaas, Ibrahim Habli
{"title":"确保人工智能安全:易错知识与格莱斯格言","authors":"Marten H. L. Kaas,&nbsp;Ibrahim Habli","doi":"10.1007/s43681-024-00490-x","DOIUrl":null,"url":null,"abstract":"<div><p>In this paper we argue that safety claims, when justified by a safety case, are descriptive fallible knowledge claims. Even if the aim of a safety case was to justify infallible knowledge about the safety of a system, such infallible safety knowledge is impossible to attain in the case of AI-enabled systems. By their nature AI-enabled systems preclude the possibility of obtaining infallible knowledge concerning their safety or lack thereof. We suggest that one can communicate knowledge of an AI-enabled system’s safety by structuring their exchange according to Paul Grice’s Cooperative Principle which can be achieved via adherence to the Gricean maxims of communication. Furthermore, these same maxims can be used to evaluate the calibre of the exchange, with the aim being to ensure that communicating knowledge about an AI-enabled system’s safety is of the highest calibre, in short, that the communication is relevant, of sufficient quantity and quality, and communicated perspicuously. The high calibre communication of safety claims to an epistemically diverse group of stakeholders is vitally important given the increasingly participatory nature of AI-enabled system design, development and assessment.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 2","pages":"1467 - 1480"},"PeriodicalIF":0.0000,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00490-x.pdf","citationCount":"0","resultStr":"{\"title\":\"Assuring AI safety: fallible knowledge and the Gricean maxims\",\"authors\":\"Marten H. L. Kaas,&nbsp;Ibrahim Habli\",\"doi\":\"10.1007/s43681-024-00490-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In this paper we argue that safety claims, when justified by a safety case, are descriptive fallible knowledge claims. Even if the aim of a safety case was to justify infallible knowledge about the safety of a system, such infallible safety knowledge is impossible to attain in the case of AI-enabled systems. By their nature AI-enabled systems preclude the possibility of obtaining infallible knowledge concerning their safety or lack thereof. We suggest that one can communicate knowledge of an AI-enabled system’s safety by structuring their exchange according to Paul Grice’s Cooperative Principle which can be achieved via adherence to the Gricean maxims of communication. Furthermore, these same maxims can be used to evaluate the calibre of the exchange, with the aim being to ensure that communicating knowledge about an AI-enabled system’s safety is of the highest calibre, in short, that the communication is relevant, of sufficient quantity and quality, and communicated perspicuously. 
The high calibre communication of safety claims to an epistemically diverse group of stakeholders is vitally important given the increasingly participatory nature of AI-enabled system design, development and assessment.</p></div>\",\"PeriodicalId\":72137,\"journal\":{\"name\":\"AI and ethics\",\"volume\":\"5 2\",\"pages\":\"1467 - 1480\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s43681-024-00490-x.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI and ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s43681-024-00490-x\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00490-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract


In this paper we argue that safety claims, when justified by a safety case, are descriptive fallible knowledge claims. Even if the aim of a safety case was to justify infallible knowledge about the safety of a system, such infallible safety knowledge is impossible to attain in the case of AI-enabled systems. By their nature AI-enabled systems preclude the possibility of obtaining infallible knowledge concerning their safety or lack thereof. We suggest that one can communicate knowledge of an AI-enabled system’s safety by structuring their exchange according to Paul Grice’s Cooperative Principle which can be achieved via adherence to the Gricean maxims of communication. Furthermore, these same maxims can be used to evaluate the calibre of the exchange, with the aim being to ensure that communicating knowledge about an AI-enabled system’s safety is of the highest calibre, in short, that the communication is relevant, of sufficient quantity and quality, and communicated perspicuously. The high calibre communication of safety claims to an epistemically diverse group of stakeholders is vitally important given the increasingly participatory nature of AI-enabled system design, development and assessment.
