{"title":"Group prioritarianism: why AI should not replace humanity","authors":"Frank Hong","doi":"10.1007/s11098-024-02189-5","DOIUrl":null,"url":null,"abstract":"<p>If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AI and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every Welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not always create the world with the most <i>value</i>, or non-welfarist theories that tell us that the best world may not be the world with the most <i>welfare</i>, I propose a new theory that justifies giving some resources to humanity in the face of overwhelming AI well-being. I call this new theory, “Group Prioritarianism\".</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2024-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PHILOSOPHICAL STUDIES","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11098-024-02189-5","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"PHILOSOPHY","Score":null,"Total":0}
Citations: 0
Abstract
If a future AI system can enjoy far more well-being than a human per unit of resource, what would be the best way to allocate resources between such future AI systems and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories, which suggest that we ought not always create the world with the most value, or non-welfarist theories, which tell us that the best world may not be the world with the most welfare, I propose a new theory that justifies giving some resources to humanity in the face of overwhelming AI well-being. I call this new theory “Group Prioritarianism”.
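The structural point behind the allocation problem can be made concrete with a small numerical sketch. The code below is not the paper's own formalism: the welfare-per-resource figures, the square-root transform, and the reading of "Group Prioritarianism" as applying a concave transform to each group's aggregate welfare are all hypothetical stand-ins chosen for illustration. It shows why any objective that is linear in total welfare picks the corner allocation (everything to the AI), while a group-level concave transform yields an interior optimum that reserves some resources for humanity.

```python
# A minimal illustrative sketch (hypothetical numbers, not the paper's model):
# compare a total-utilitarian objective with a "group prioritarian" objective
# that applies a concave transform (here sqrt, a stand-in) to each group's
# aggregate welfare.

import math

TOTAL_RESOURCES = 100.0
# Assumption for illustration: the AI enjoys 10x the well-being per resource.
WELFARE_PER_RESOURCE = {"ai": 10.0, "humans": 1.0}

def group_welfare(r_ai: float) -> dict:
    """Aggregate welfare of each group when r_ai resources go to the AI."""
    return {
        "ai": WELFARE_PER_RESOURCE["ai"] * r_ai,
        "humans": WELFARE_PER_RESOURCE["humans"] * (TOTAL_RESOURCES - r_ai),
    }

def total_utilitarian_value(r_ai: float) -> float:
    """Linear in welfare, so it is maximized at a corner: everything to the AI."""
    w = group_welfare(r_ai)
    return w["ai"] + w["humans"]

def group_prioritarian_value(r_ai: float) -> float:
    """Concave transform applied to each group's total welfare, so marginal
    value diminishes within a group and an interior allocation can win."""
    w = group_welfare(r_ai)
    return math.sqrt(w["ai"]) + math.sqrt(w["humans"])

# Search over allocations on a 0.1-resource grid.
grid = [i / 10 for i in range(int(TOTAL_RESOURCES) * 10 + 1)]
best_util = max(grid, key=total_utilitarian_value)
best_prio = max(grid, key=group_prioritarian_value)
print(f"total utilitarianism gives {best_util:.1f}/{TOTAL_RESOURCES:.0f} resources to the AI")
print(f"group prioritarianism gives {best_prio:.1f}/{TOTAL_RESOURCES:.0f} resources to the AI")
```

Under these stipulated numbers, the total-utilitarian objective allocates all 100 resources to the AI, while the group-prioritarian objective allocates roughly 90.9, leaving about 9.1 for humanity. The exact split depends on the assumed transform and welfare ratios; the point is only that diminishing marginal value at the group level, unlike linear aggregation, never forces the corner solution.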
Journal description:
Philosophical Studies was founded in 1950 by Herbert Feigl and Wilfrid Sellars to provide a periodical dedicated to work in analytic philosophy. The journal remains devoted exclusively to the publication of papers in analytic philosophy. Papers applying formal techniques to philosophical problems are welcome. The principal aim is to publish articles that are models of clarity and precision in dealing with significant philosophical issues, keeping readers abreast of the central issues and problems of contemporary analytic philosophy.
Double-blind review procedure
The journal follows a double-blind reviewing procedure. Authors are therefore requested to place their name and affiliation on a separate page. Self-identifying citations and references in the article text should either be avoided or left blank when manuscripts are first submitted. Authors are responsible for reinserting self-identifying citations and references when manuscripts are prepared for final submission.