{"title":"人工智能安全框架评分标准","authors":"Jide Alaga, Jonas Schuett, Markus Anderljung","doi":"arxiv-2409.08751","DOIUrl":null,"url":null,"abstract":"Over the past year, artificial intelligence (AI) companies have been\nincreasingly adopting AI safety frameworks. These frameworks outline how\ncompanies intend to keep the potential risks associated with developing and\ndeploying frontier AI systems to an acceptable level. Major players like\nAnthropic, OpenAI, and Google DeepMind have already published their frameworks,\nwhile another 13 companies have signaled their intent to release similar\nframeworks by February 2025. Given their central role in AI companies' efforts\nto identify and address unacceptable risks from their systems, AI safety\nframeworks warrant significant scrutiny. To enable governments, academia, and\ncivil society to pass judgment on these frameworks, this paper proposes a\ngrading rubric. The rubric consists of seven evaluation criteria and 21\nindicators that concretize the criteria. Each criterion can be graded on a\nscale from A (gold standard) to F (substandard). The paper also suggests three\nmethods for applying the rubric: surveys, Delphi studies, and audits. The\npurpose of the grading rubric is to enable nuanced comparisons between\nframeworks, identify potential areas of improvement, and promote a race to the\ntop in responsible AI development.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Grading Rubric for AI Safety Frameworks\",\"authors\":\"Jide Alaga, Jonas Schuett, Markus Anderljung\",\"doi\":\"arxiv-2409.08751\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Over the past year, artificial intelligence (AI) companies have been\\nincreasingly adopting AI safety frameworks. These frameworks outline how\\ncompanies intend to keep the potential risks associated with developing and\\ndeploying frontier AI systems to an acceptable level. Major players like\\nAnthropic, OpenAI, and Google DeepMind have already published their frameworks,\\nwhile another 13 companies have signaled their intent to release similar\\nframeworks by February 2025. Given their central role in AI companies' efforts\\nto identify and address unacceptable risks from their systems, AI safety\\nframeworks warrant significant scrutiny. To enable governments, academia, and\\ncivil society to pass judgment on these frameworks, this paper proposes a\\ngrading rubric. The rubric consists of seven evaluation criteria and 21\\nindicators that concretize the criteria. Each criterion can be graded on a\\nscale from A (gold standard) to F (substandard). The paper also suggests three\\nmethods for applying the rubric: surveys, Delphi studies, and audits. 
The\\npurpose of the grading rubric is to enable nuanced comparisons between\\nframeworks, identify potential areas of improvement, and promote a race to the\\ntop in responsible AI development.\",\"PeriodicalId\":501112,\"journal\":{\"name\":\"arXiv - CS - Computers and Society\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computers and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08751\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computers and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08751","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Over the past year, artificial intelligence (AI) companies have been
increasingly adopting AI safety frameworks. These frameworks outline how
companies intend to keep the potential risks associated with developing and
deploying frontier AI systems to an acceptable level. Major players like
Anthropic, OpenAI, and Google DeepMind have already published their frameworks,
while another 13 companies have signaled their intent to release similar
frameworks by February 2025. Given their central role in AI companies' efforts
to identify and address unacceptable risks from their systems, AI safety
frameworks warrant significant scrutiny. To enable governments, academia, and
civil society to pass judgment on these frameworks, this paper proposes a
grading rubric. The rubric consists of seven evaluation criteria and 21
indicators that concretize the criteria. Each criterion can be graded on a
scale from A (gold standard) to F (substandard). The paper also suggests three
methods for applying the rubric: surveys, Delphi studies, and audits. The
purpose of the grading rubric is to enable nuanced comparisons between
frameworks, identify potential areas of improvement, and promote a race to the
top in responsible AI development.
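
To make the rubric's overall shape concrete, the minimal Python sketch below models an assessment of one framework as criteria graded on the A-to-F scale, each concretized by indicators. This is an illustrative assumption, not code or data from the paper: the criterion names, indicators, and the grade-point aggregation are hypothetical placeholders, and the paper's actual seven criteria and 21 indicators are defined in its body.

from dataclasses import dataclass, field
from enum import Enum
from statistics import mean


class Grade(Enum):
    """Letter grades on the rubric's scale from A (gold standard) to F (substandard)."""
    A = 4.0
    B = 3.0
    C = 2.0
    D = 1.0
    F = 0.0


@dataclass
class Criterion:
    """One evaluation criterion, concretized by one or more indicators."""
    name: str
    indicators: list[str]
    grade: Grade | None = None  # unset until a reviewer assigns a grade


@dataclass
class FrameworkAssessment:
    """A single AI safety framework graded against all criteria."""
    framework: str
    criteria: list[Criterion] = field(default_factory=list)

    def grade_point_average(self) -> float:
        """Average the graded criteria to allow a coarse comparison between frameworks."""
        graded = [c.grade.value for c in self.criteria if c.grade is not None]
        return mean(graded) if graded else 0.0


# Hypothetical usage: placeholder criteria and grades, not the paper's findings.
assessment = FrameworkAssessment(
    framework="Example Frontier Lab Framework",
    criteria=[
        Criterion("Credibility", ["evidence cited for risk thresholds"], Grade.B),
        Criterion("Robustness", ["safeguards scale with model capability"], Grade.C),
    ],
)
print(f"{assessment.framework}: GPA {assessment.grade_point_average():.1f}")

Keeping per-criterion grades rather than a single overall score mirrors the rubric's intent: nuanced comparison across frameworks and identification of specific areas for improvement, whether the grades come from surveys, Delphi studies, or audits.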