{"title":"Vagueness without truth functionality? No worries","authors":"Bret Donnelly","doi":"10.1007/s11098-025-02318-8","DOIUrl":"https://doi.org/10.1007/s11098-025-02318-8","url":null,"abstract":"<p>Among theories of vagueness, supervaluationism stands out for its non–truth functional account of the logical connectives. For example, the disjunction of two atomic statements that are not determinately true or false can, itself, come out <i>either</i> true or indeterminate, depending on its content—a consequence several philosophers find problematic. Smith (2016) turns this point against supervaluationism most pressingly, arguing that truth functionality is <i>essential</i> to any adequate model of truth. But this conclusion is too strong. Here, I argue that the problem with standard forms of supervaluationism is not the failure of truth functionality per se, but rather that they lack the structural resources necessary to <i>algorithmically</i> assign truth values to sentences based on their respective subject matters. However, recent developments of supervaluationism, which draw upon the cognitive science framework of conceptual spaces, resolve this issue. By incorporating conceptual information directly into their model-theoretic representations of the subject matters of sentences, these newer frameworks retain sensitivity to conceptual relations while providing consistent, content-based valuations of truth. Hence, their lack of truth functionality is nothing to worry about.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"11 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two types of AI existential risk: decisive and accumulative","authors":"Atoosa Kasirzadeh","doi":"10.1007/s11098-025-02301-3","DOIUrl":"https://doi.org/10.1007/s11098-025-02301-3","url":null,"abstract":"<p>The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This decisive view, however, often neglects the serious possibility of AI x-risk manifesting gradually through an incremental series of smaller yet interconnected disruptions, crossing critical thresholds over time. This paper contrasts the conventional <i>decisive AI x-risk hypothesis</i> with what I call an <i>accumulative AI x-risk hypothesis</i>. While the former envisions an overt AI takeover pathway, characterized by scenarios like uncontrollable superintelligence, the latter suggests a different pathway to existential catastrophes. This involves a gradual accumulation of AI-induced threats such as severe vulnerabilities and systemic erosion of critical economic and political structures. The accumulative hypothesis suggests a boiling frog scenario where incremental AI risks slowly undermine systemic and societal resilience until a triggering event results in irreversible collapse. Through complex systems analysis, this paper examines the distinct assumptions differentiating these two hypotheses. It is then argued that the accumulative view can reconcile seemingly incompatible perspectives on AI risks. The implications of differentiating between the two types of pathway—the decisive and the accumulative—for the governance of AI as well as long-term AI safety are discussed.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"183 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A matter of principle? AI alignment as the fair treatment of claims","authors":"Iason Gabriel, Geoff Keeling","doi":"10.1007/s11098-025-02300-4","DOIUrl":"https://doi.org/10.1007/s11098-025-02300-4","url":null,"abstract":"<p>The normative challenge of AI alignment centres upon what goals or values ought to be encoded in AI systems to govern their behaviour. A number of answers have been proposed, including the notion that AI must be aligned with human intentions or that it should aim to be helpful, honest and harmless. Nonetheless, both accounts suffer from critical weaknesses. On the one hand, they are incomplete: neither specification provides adequate guidance to AI systems, deployed across various domains with multiple parties. On the other hand, the justification for these approaches is questionable and, we argue, of the wrong kind. More specifically, neither approach takes seriously the need to justify the operation of AI systems to those affected by their actions – or what this means for pluralistic societies where people have different underlying beliefs about value. To address these limitations, we propose an alternative account of AI alignment that focuses on fair processes. We argue that principles that are the product of these processes are the appropriate target for alignment. This approach can meet the necessary standard of public justification, generate a fuller set of principles for AI that are sensitive to variation in context, and has explanatory power insofar as it makes sense of our intuitions about AI systems and points to a number of hitherto underappreciated ways in which an AI system may cease to be aligned.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"72 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grounding, contingentism, and the reduction of metaphysical necessity to essence","authors":"Qichen Yan","doi":"10.1007/s11098-025-02309-9","DOIUrl":"https://doi.org/10.1007/s11098-025-02309-9","url":null,"abstract":"<p>Teitel (Mind 128:39-68, 2019) argues that the following three doctrines are jointly inconsistent: i) the doctrine that metaphysical necessity reduces to essence; ii) the doctrine that possibly something could fail to exist; and iii) the doctrine that metaphysical necessity obeys a modal logic of at least S4. This paper presents a novel solution to Teitel’s puzzle, regimented in a higher-order logical setting, which is crucially based on the idea that the putative reduction of metaphysical necessity to essence should be understood through appealing to some hyperintensional notion—such as grounding or real definition—rather than the notion of identity/identification. Moreover, it will also be shown that the proposed reductive account has a significant advantage over its rival account.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"96 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143695281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Liberal legitimacy and future citizens","authors":"Emil Andersson","doi":"10.1007/s11098-025-02308-w","DOIUrl":"https://doi.org/10.1007/s11098-025-02308-w","url":null,"abstract":"<p>If the legitimate exercise of political power requires justifiability to all citizens, as John Rawls’s influential <i>Liberal Principle of Legitimacy</i> states, then what should we say about the legitimacy of institutions and actions that have a significant impact on the interests of future citizens? Surprisingly, this question has been neglected in the literature. This paper questions the assumption that it is only justifiability to presently existing citizens that matters, and provides reasons for thinking that legitimacy requires justifiability to future citizens as well. Further, it is argued that the presently dominant interpretation of Rawls’s principle is unable to take future citizens into account in an adequate way. Therefore, the inclusion of these citizens among those to whom justifiability is owed gives us good reasons to reject this interpretation, and to adopt a different understanding of the view.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"94 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143672505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the accuracy and aptness of suspension","authors":"Sven Bernecker, Luis Rosa","doi":"10.1007/s11098-025-02306-y","DOIUrl":"https://doi.org/10.1007/s11098-025-02306-y","url":null,"abstract":"<p>This paper challenges Sosa’s account of the epistemic propriety of suspension of judgment. We take the reader on a test drive through some common problem cases in epistemology and argue that Sosa makes accurate and apt suspension both too easy and too hard.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"37 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143672506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knowledge by acquaintance & impartial virtue","authors":"Emad H. Atiq","doi":"10.1007/s11098-025-02289-w","DOIUrl":"https://doi.org/10.1007/s11098-025-02289-w","url":null,"abstract":"<p>Russell (Proc Aristot Soc 11:108–128, 1911; The Problems of Philosophy, Thornton Butterworth Limited, London, 1912) argued that perceptual experience grounds a species of non-propositional knowledge, “knowledge by acquaintance,” and in recent years, this account of knowledge has been gaining traction. I defend on its basis a connection between moral and epistemic failure. I argue, first, that insufficient concern for the suffering of others can be explained in terms of an agent’s lack of acquaintance knowledge of another’s suffering, and second, that empathy improves our epistemic situation. Empathic distress approximates acquaintance with another’s suffering, and empathic agents who are motivated to help rather than disengage exhibit an important epistemic virtue: a variety of intellectual courage. A key upshot is that an independently motivated account of the structure and significance of perceptual experience is shown to provide theoretical scaffolding for understanding a famously elusive idea in ethics—namely, that the failure to help others stems from a kind of ignorance of their situation.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"18 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143618386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can AI make scientific discoveries?","authors":"Marianna Bergamaschi Ganapini","doi":"10.1007/s11098-025-02299-8","DOIUrl":"https://doi.org/10.1007/s11098-025-02299-8","url":null,"abstract":"<p>AI technologies have shown remarkable capabilities in various scientific fields, such as drug discovery, medicine, climate modeling, and archaeology, primarily through their pattern recognition abilities. They can also generate hypotheses and suggest new research directions. While acknowledging AI’s potential to aid in scientific breakthroughs, the paper shows that current AI models do not meet the criteria for making independent scientific discoveries. Discovery is seen as an epistemic achievement that requires a level of competence and self-awareness that AI does not yet possess.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"41 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143618363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thank you for misunderstanding!","authors":"Collin Rice, Kareem Khalifa","doi":"10.1007/s11098-025-02311-1","DOIUrl":"https://doi.org/10.1007/s11098-025-02311-1","url":null,"abstract":"<p>This paper examines cases in which an individual’s misunderstanding improves the scientific community’s understanding through “corrective” processes that produce understanding from poor epistemic inputs. To highlight the unique features of valuable misunderstandings and corrective processes, we contrast them with other social-epistemological phenomena including testimonial understanding, collective understanding, Longino’s critical contextual empiricism, and knowledge from falsehoods.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"15 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143618326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On being good friends with a bad person","authors":"Yiran Hua","doi":"10.1007/s11098-025-02294-z","DOIUrl":"https://doi.org/10.1007/s11098-025-02294-z","url":null,"abstract":"<p>Many philosophers believe that it counts against one morally if one is close and good friends with a bad person. Some argue that one acts badly by counting a bad person as a good friend, because such friendships carry significant moral risks. Others locate the moral badness in one’s moral psychology, suggesting that one becomes objectionably complacent by being good friends with a bad person. In this paper, I argue that none of these accounts are plausible. In fact, I propose that the starting intuition, that there is something <i>pro tanto</i> morally bad in being close and good friends with a bad person, does not track ethical reality. A person’s friend list isn’t at all in-principle informative of a person’s moral character. I also diagnose why we nonetheless have this mistaken intuition. I propose that friendships are <i>fragmented</i> in two crucial aspects. Once we observe these fragmentations, our initially mistaken intuition completely goes away.</p>","PeriodicalId":48305,"journal":{"name":"PHILOSOPHICAL STUDIES","volume":"24 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}