{"title":"为人工智能无法保险的风险提供保险:国家作为最后的保险人","authors":"Cristian Trout","doi":"arxiv-2409.06672","DOIUrl":null,"url":null,"abstract":"Many experts believe that AI systems will sooner or later pose uninsurable\nrisks, including existential risks. This creates an extreme judgment-proof\nproblem: few if any parties can be held accountable ex post in the event of\nsuch a catastrophe. This paper proposes a novel solution: a\ngovernment-provided, mandatory indemnification program for AI developers. The\nprogram uses risk-priced indemnity fees to induce socially optimal levels of\ncare. Risk-estimates are determined by surveying experts, including indemnified\ndevelopers. The Bayesian Truth Serum mechanism is employed to incent honest and\neffortful responses. Compared to alternatives, this approach arguably better\nleverages all private information, and provides a clearer signal to indemnified\ndevelopers regarding what risks they must mitigate to lower their fees. It's\nrecommended that collected fees be used to help fund the safety research\ndevelopers need, employing a fund matching mechanism (Quadratic Financing) to\ninduce an optimal supply of this public good. Under Quadratic Financing, safety\nresearch projects would compete for private contributions from developers,\nsignaling how much each is to be supplemented with public funds.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"66 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Insuring Uninsurable Risks from AI: The State as Insurer of Last Resort\",\"authors\":\"Cristian Trout\",\"doi\":\"arxiv-2409.06672\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many experts believe that AI systems will sooner or later pose uninsurable\\nrisks, including existential risks. 
This creates an extreme judgment-proof\\nproblem: few if any parties can be held accountable ex post in the event of\\nsuch a catastrophe. This paper proposes a novel solution: a\\ngovernment-provided, mandatory indemnification program for AI developers. The\\nprogram uses risk-priced indemnity fees to induce socially optimal levels of\\ncare. Risk-estimates are determined by surveying experts, including indemnified\\ndevelopers. The Bayesian Truth Serum mechanism is employed to incent honest and\\neffortful responses. Compared to alternatives, this approach arguably better\\nleverages all private information, and provides a clearer signal to indemnified\\ndevelopers regarding what risks they must mitigate to lower their fees. It's\\nrecommended that collected fees be used to help fund the safety research\\ndevelopers need, employing a fund matching mechanism (Quadratic Financing) to\\ninduce an optimal supply of this public good. Under Quadratic Financing, safety\\nresearch projects would compete for private contributions from developers,\\nsignaling how much each is to be supplemented with public funds.\",\"PeriodicalId\":501112,\"journal\":{\"name\":\"arXiv - CS - Computers and Society\",\"volume\":\"66 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computers and Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06672\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computers and 
Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06672","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Insuring Uninsurable Risks from AI: The State as Insurer of Last Resort
Many experts believe that AI systems will sooner or later pose uninsurable
risks, including existential risks. This creates an extreme judgment-proof
problem: few if any parties could be held accountable ex post in the event of
such a catastrophe. This paper proposes a novel solution: a
government-provided, mandatory indemnification program for AI developers. The
program uses risk-priced indemnity fees to induce socially optimal levels of
care. Risk estimates are determined by surveying experts, including the
indemnified developers themselves, with the Bayesian Truth Serum mechanism
employed to incentivize honest and effortful responses. Compared to
alternatives, this approach arguably makes better use of all private
information, and gives indemnified developers a clearer signal about which
risks they must mitigate to lower their fees. It is recommended that the
collected fees be used to help fund the safety research developers need,
employing a fund-matching mechanism (Quadratic Financing) to induce an optimal
supply of this public good. Under Quadratic Financing, safety research projects
would compete for private contributions from developers, and those
contributions would signal how much each project should be supplemented with
public funds.
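The Bayesian Truth Serum the abstract refers to is Prelec's (2004) survey-scoring mechanism: each respondent both answers a question and predicts how the population will answer, and is rewarded for answers that are "surprisingly common" relative to the crowd's predictions. The sketch below is not taken from the paper; it is a minimal illustration of the standard BTS scoring rule (information score plus an `alpha`-weighted prediction score), with a tiny smoothing constant added as an assumption to avoid taking `log(0)`.

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian Truth Serum scores for a multiple-choice question.

    answers[r]        -- respondent r's chosen option index
    predictions[r][k] -- r's predicted population frequency of option k
    """
    n = len(answers)
    m = len(predictions[0])
    # x[k]: observed endorsement frequency of option k
    # (small epsilon keeps log() defined for unchosen options)
    x = [(sum(1 for a in answers if a == k) + 1e-9) / n for k in range(m)]
    # log_p[k]: log of the geometric mean of predicted frequencies for k
    log_p = [sum(math.log(pr[k]) for pr in predictions) / n for k in range(m)]

    scores = []
    for r in range(n):
        # Information score: how surprisingly common r's answer is
        info = math.log(x[answers[r]]) - log_p[answers[r]]
        # Prediction score: negative KL divergence of actual frequencies
        # from r's prediction -- maximized by predicting truthfully
        pred = alpha * sum(x[k] * math.log(predictions[r][k] / x[k])
                           for k in range(m))
        scores.append(info + pred)
    return scores
```

Under this rule, truth-telling is a Bayesian Nash equilibrium for large samples, which is what lets the program elicit honest risk estimates even from the indemnified developers themselves.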
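The Quadratic Financing mechanism the abstract invokes (Buterin, Hitzig, and Weyl's quadratic funding) has a simple closed form: a project's total funding is the square of the sum of the square roots of its private contributions, with the gap above the private total topped up from public funds. A minimal sketch, not from the paper:

```python
import math

def quadratic_match(contributions):
    """Public-funds top-up for one project under quadratic funding:
    (sum of sqrt(contribution))^2 minus the private total."""
    private_total = sum(contributions)
    ideal_total = sum(math.sqrt(c) for c in contributions) ** 2
    return ideal_total - private_total

# Broad support attracts a larger match than one deep pocket of equal size:
broad = quadratic_match([25, 25, 25, 25])  # four developers give 25 each -> 300
single = quadratic_match([100])            # one developer gives 100      -> 0
```

This is what lets developers' private contributions act as the signal the abstract describes: projects valued by many contributors draw proportionally more public supplementation than projects backed by a single large donor.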