A Grading Rubric for AI Safety Frameworks

Jide Alaga, Jonas Schuett, Markus Anderljung
{"title":"人工智能安全框架评分标准","authors":"Jide Alaga, Jonas Schuett, Markus Anderljung","doi":"arxiv-2409.08751","DOIUrl":null,"url":null,"abstract":"Over the past year, artificial intelligence (AI) companies have been\nincreasingly adopting AI safety frameworks. These frameworks outline how\ncompanies intend to keep the potential risks associated with developing and\ndeploying frontier AI systems to an acceptable level. Major players like\nAnthropic, OpenAI, and Google DeepMind have already published their frameworks,\nwhile another 13 companies have signaled their intent to release similar\nframeworks by February 2025. Given their central role in AI companies' efforts\nto identify and address unacceptable risks from their systems, AI safety\nframeworks warrant significant scrutiny. To enable governments, academia, and\ncivil society to pass judgment on these frameworks, this paper proposes a\ngrading rubric. The rubric consists of seven evaluation criteria and 21\nindicators that concretize the criteria. Each criterion can be graded on a\nscale from A (gold standard) to F (substandard). The paper also suggests three\nmethods for applying the rubric: surveys, Delphi studies, and audits. The\npurpose of the grading rubric is to enable nuanced comparisons between\nframeworks, identify potential areas of improvement, and promote a race to the\ntop in responsible AI development.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Grading Rubric for AI Safety Frameworks\",\"authors\":\"Jide Alaga, Jonas Schuett, Markus Anderljung\",\"doi\":\"arxiv-2409.08751\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Over the past year, artificial intelligence (AI) companies have been\\nincreasingly adopting AI safety frameworks. These frameworks outline how\\ncompanies intend to keep the potential risks associated with developing and\\ndeploying frontier AI systems to an acceptable level. Major players like\\nAnthropic, OpenAI, and Google DeepMind have already published their frameworks,\\nwhile another 13 companies have signaled their intent to release similar\\nframeworks by February 2025. Given their central role in AI companies' efforts\\nto identify and address unacceptable risks from their systems, AI safety\\nframeworks warrant significant scrutiny. To enable governments, academia, and\\ncivil society to pass judgment on these frameworks, this paper proposes a\\ngrading rubric. The rubric consists of seven evaluation criteria and 21\\nindicators that concretize the criteria. Each criterion can be graded on a\\nscale from A (gold standard) to F (substandard). The paper also suggests three\\nmethods for applying the rubric: surveys, Delphi studies, and audits. 
The\\npurpose of the grading rubric is to enable nuanced comparisons between\\nframeworks, identify potential areas of improvement, and promote a race to the\\ntop in responsible AI development.\",\"PeriodicalId\":501112,\"journal\":{\"name\":\"arXiv - CS - Computers and Society\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computers and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08751\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computers and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08751","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Over the past year, artificial intelligence (AI) companies have been increasingly adopting AI safety frameworks. These frameworks outline how companies intend to keep the potential risks associated with developing and deploying frontier AI systems to an acceptable level. Major players like Anthropic, OpenAI, and Google DeepMind have already published their frameworks, while another 13 companies have signaled their intent to release similar frameworks by February 2025. Given their central role in AI companies' efforts to identify and address unacceptable risks from their systems, AI safety frameworks warrant significant scrutiny. To enable governments, academia, and civil society to pass judgment on these frameworks, this paper proposes a grading rubric. The rubric consists of seven evaluation criteria and 21 indicators that concretize the criteria. Each criterion can be graded on a scale from A (gold standard) to F (substandard). The paper also suggests three methods for applying the rubric: surveys, Delphi studies, and audits. The purpose of the grading rubric is to enable nuanced comparisons between frameworks, identify potential areas of improvement, and promote a race to the top in responsible AI development.
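
To make the rubric's structure concrete, below is a minimal Python sketch of one possible data model: each criterion carries the indicators that concretize it and a grade on the A (gold standard) to F (substandard) scale, and grades can be aggregated for a coarse comparison between frameworks. The class names, the placeholder criterion and indicator labels, and the averaging step are illustrative assumptions, not anything the paper specifies.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Grade(Enum):
    """Letter grades from A (gold standard) to F (substandard), per the paper."""
    A = 4
    B = 3
    C = 2
    D = 1
    F = 0


@dataclass
class Criterion:
    """One evaluation criterion, concretized by its indicators."""
    name: str
    indicators: list[str]
    grade: Optional[Grade] = None  # assigned via a survey, Delphi study, or audit


@dataclass
class FrameworkEvaluation:
    """A graded AI safety framework for one company."""
    company: str
    criteria: list[Criterion] = field(default_factory=list)

    def average_grade(self) -> float:
        """Average the numeric grade values for a coarse cross-framework
        comparison (an aggregation assumed here; the paper itself reports
        grades per criterion rather than a single score)."""
        graded = [c.grade.value for c in self.criteria if c.grade is not None]
        return sum(graded) / len(graded) if graded else 0.0


# Hypothetical example: criterion and indicator names are placeholders,
# not the paper's actual seven criteria or 21 indicators.
evaluation = FrameworkEvaluation(
    company="ExampleAI",
    criteria=[
        Criterion("criterion_1", ["indicator_1a", "indicator_1b"], Grade.B),
        Criterion("criterion_2", ["indicator_2a", "indicator_2b"], Grade.C),
    ],
)
print(evaluation.average_grade())  # 2.5
```

Mapping letter grades to numbers is only one of several possible aggregation choices; any single-score summary trades away the per-criterion nuance the rubric is designed to surface.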