A matter of principle? AI alignment as the fair treatment of claims

IF 1.1 · Region 1 (Philosophy)
Iason Gabriel, Geoff Keeling
{"title":"原则问题?AI对齐作为公平对待索赔","authors":"Iason Gabriel, Geoff Keeling","doi":"10.1007/s11098-025-02300-4","DOIUrl":null,"url":null,"abstract":"<p>The normative challenge of AI alignment centres upon what goals or values ought to be encoded in AI systems to govern their behaviour. A number of answers have been proposed, including the notion that AI must be aligned with human intentions or that it should aim to be helpful, honest and harmless. Nonetheless, both accounts suffer from critical weaknesses. On the one hand, they are incomplete: neither specification provides adequate guidance to AI systems, deployed across various domains with multiple parties. On the other hand, the justification for these approaches is questionable and, we argue, of the wrong kind. More specifically, neither approach takes seriously the need to justify the operation of AI systems to those affected by their actions – or what this means for pluralistic societies where people have different underlying beliefs about value. To address these limitations, we propose an alternative account of AI alignment that focuses on fair processes. We argue that principles that are the product of these processes are the appropriate target for alignment. This approach can meet the necessary standard of public justification, generate a fuller set of principles for AI that are sensitive to variation in context, and has explanatory power insofar as it makes sense of our intuitions about AI systems and points to a number of hitherto underappreciated ways in which an AI system may cease to be aligned.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"72 1","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A matter of principle? AI alignment as the fair treatment of claims\",\"authors\":\"Iason Gabriel, Geoff Keeling\",\"doi\":\"10.1007/s11098-025-02300-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The normative challenge of AI alignment centres upon what goals or values ought to be encoded in AI systems to govern their behaviour. A number of answers have been proposed, including the notion that AI must be aligned with human intentions or that it should aim to be helpful, honest and harmless. Nonetheless, both accounts suffer from critical weaknesses. On the one hand, they are incomplete: neither specification provides adequate guidance to AI systems, deployed across various domains with multiple parties. On the other hand, the justification for these approaches is questionable and, we argue, of the wrong kind. More specifically, neither approach takes seriously the need to justify the operation of AI systems to those affected by their actions – or what this means for pluralistic societies where people have different underlying beliefs about value. To address these limitations, we propose an alternative account of AI alignment that focuses on fair processes. We argue that principles that are the product of these processes are the appropriate target for alignment. 
This approach can meet the necessary standard of public justification, generate a fuller set of principles for AI that are sensitive to variation in context, and has explanatory power insofar as it makes sense of our intuitions about AI systems and points to a number of hitherto underappreciated ways in which an AI system may cease to be aligned.</p>\",\"PeriodicalId\":48305,\"journal\":{\"name\":\"PHILOSOPHICAL STUDIES\",\"volume\":\"72 1\",\"pages\":\"\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2025-03-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PHILOSOPHICAL STUDIES\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s11098-025-02300-4\",\"RegionNum\":1,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"PHILOSOPHY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PHILOSOPHICAL STUDIES","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11098-025-02300-4","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"PHILOSOPHY","Score":null,"Total":0}
Citations: 0

Abstract


The normative challenge of AI alignment centres upon what goals or values ought to be encoded in AI systems to govern their behaviour. A number of answers have been proposed, including the notion that AI must be aligned with human intentions or that it should aim to be helpful, honest and harmless. Nonetheless, both accounts suffer from critical weaknesses. On the one hand, they are incomplete: neither specification provides adequate guidance to AI systems, deployed across various domains with multiple parties. On the other hand, the justification for these approaches is questionable and, we argue, of the wrong kind. More specifically, neither approach takes seriously the need to justify the operation of AI systems to those affected by their actions – or what this means for pluralistic societies where people have different underlying beliefs about value. To address these limitations, we propose an alternative account of AI alignment that focuses on fair processes. We argue that principles that are the product of these processes are the appropriate target for alignment. This approach can meet the necessary standard of public justification, generate a fuller set of principles for AI that are sensitive to variation in context, and has explanatory power insofar as it makes sense of our intuitions about AI systems and points to a number of hitherto underappreciated ways in which an AI system may cease to be aligned.

Source journal: PHILOSOPHICAL STUDIES
CiteScore: 2.60
Self-citation rate: 7.70%
Articles published: 127
Journal description: Philosophical Studies was founded in 1950 by Herbert Feigl and Wilfrid Sellars to provide a periodical dedicated to work in analytic philosophy. The journal remains devoted to the publication of papers in exclusively analytic philosophy. Papers applying formal techniques to philosophical problems are welcome. The principal aim is to publish articles that are models of clarity and precision in dealing with significant philosophical issues. It is intended that readers of the journal will be kept abreast of the central issues and problems of contemporary analytic philosophy.

Double-blind review procedure: The journal follows a double-blind reviewing procedure. Authors are therefore requested to place their name and affiliation on a separate page. Self-identifying citations and references in the article text should either be avoided or left blank when manuscripts are first submitted. Authors are responsible for reinserting self-identifying citations and references when manuscripts are prepared for final submission.