{"title":"Conceptualising conceptual resilience. A comparative approach.","authors":"Samuela Marchiori, Joseph Sta Maria","doi":"10.1007/s13347-026-01045-0","DOIUrl":"10.1007/s13347-026-01045-0","url":null,"abstract":"<p><p>Much of the existing literature on conceptual engineering in the philosophy of technology has concentrated on identifying when and how concepts are disrupted under pressure, and how such disruptions can be addressed through conceptual engineering interventions. By and large, this literature has predominantly resorted to conceptual engineering as an approach to diagnose and remedy disruption. Recent work by Lundgren (2024) suggests that a shift from restorative to preventative conceptual engineering is warranted: rather than analysing disruptions post hoc, concepts can be deliberately designed to resist disruption from the outset. This paper introduces and develops the notion of <i>conceptual resilience</i> as the capacity of concepts to maintain continuous functional adequacy despite tensions, pressures, or other disturbances. Unlike Lundgren's (2024) account, which frames this phenomenon in terms of <i>conceptual stability</i>, we argue that <i>resilience</i> better accommodates a broader range of modes of resistance to disruption, including those that involve adaptive transformation rather than static continuity. We further argue that conceptual resilience is not a binary property, but a capacity exhibited in degrees. 
Drawing from interdisciplinary literatures, we introduce two heuristic framings-<i>Conceptual Resilience as Immutability</i> (CRI) and <i>Conceptual Resilience as Adaptability</i> (CRA)-which capture contrasting yet complementary ways in which concepts preserve their functional adequacy under pressure.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"39 1","pages":"45"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12950040/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147345417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Many Faces of Indeterminacy in Interactive Deadbots.","authors":"Atay Kozlovski, Edina Harbinja, Roel Dobbe","doi":"10.1007/s13347-026-01089-2","DOIUrl":"https://doi.org/10.1007/s13347-026-01089-2","url":null,"abstract":"<p><p>Advances in generative AI have given rise to a growing industry centred on interactive representations of deceased individuals. Within this emerging \"digital afterlife industry\", <i>interactive deadbots</i> (IDBs) are presented as hyper-realistic avatars that use a person's likeness, voice, and personal data to simulate conversational interactions with them. Rapidly moving from a niche experiment to a mainstream phenomenon, IDBs are poised to reshape the ethical, social, legal, and governance landscapes surrounding death, mourning, and digital legacy. This paper examines the disruptive nature of IDB technology through a multidisciplinary lens, using the concept of indeterminacy as its guiding analytical framework and a novel way to conceptualise the unstable field. Rather than advancing a unified understanding of indeterminacy, we introduce a structured analytical map and provisional taxonomy that distinguishes technological, social, philosophical, legal, and regulatory manifestations of indeterminacy in IDBs. By offering a tentative and necessarily selective map of this fluid and nascent field, we explore how indeterminacy and IDBs intersect. 
The paper examines how IDBs amplify existing forms of indeterminacy and how indeterminacy itself shapes the development and use of these systems across five domains: technological, social, philosophical, legal, and regulatory.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"39 2","pages":"74"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13076435/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147692796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Formula of Humanity and AI Use.","authors":"Martin Sticker","doi":"10.1007/s13347-026-01100-w","DOIUrl":"https://doi.org/10.1007/s13347-026-01100-w","url":null,"abstract":"<p><p>Aylsworth and Castro have argued that, following Kant's Formula of Humanity, using ChatGPT to write humanities essays constitutes a violation of a duty to cultivate one's humanity. I first turn to a critical evaluation of their argument and then point to a further dimension in which the FH has a bearing on the ethics of AI use. My positive contribution is to propose that Kant's Formula of Humanity can contribute to the ethics of LLM use when we focus on the prohibition against treating <i>others</i> as mere means. Kant's practical philosophy is in an excellent position to capture both the danger of treating others as mere means and to account for the value of being a means. AI may endanger both: the status of some workers as ends rather than mere means and the ability of workers to function as ends on their own terms.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"39 2","pages":"90"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13134972/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147821864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does Accountability Require Agency? Comment on Responsibility and Accountability in the Algorithmic Society.","authors":"Tillmann Vierkant","doi":"10.1007/s13347-025-01014-z","DOIUrl":"10.1007/s13347-025-01014-z","url":null,"abstract":"<p><p>In their intriguing paper <i>Responsibility and Accountability in an Algorithmic Society (2025)</i> the authors argue that the debate on how to deal with responsibility related issues with algorithmic agents requires a distinction between responsibility and accountability. In this comment to their paper, it is argued that while the notion of accountability as understood by the authors brings some significant benefits it also is ambiguous in an important way. Accountability could be understood as being purely instrumental with regard to general morally desirable consequences or it could be understood as necessarily containing an element of scaffolding for the agent who is held to account. The comment develops the options and discusses the consequences of choosing either of them.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"39 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795937/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145971231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy and Human-AI Relationships.","authors":"Christopher Register, Maryam Ali Khan, Alberto Giubilini, Brian David Earp, Julian Savulescu","doi":"10.1007/s13347-025-00978-2","DOIUrl":"10.1007/s13347-025-00978-2","url":null,"abstract":"<p><p>Artificial intelligence (AI) agents such as chatbots and personal AI assistants are increasingly popular. These technologies raise new privacy concerns beyond those posed by other AI systems or information technologies. For example, anthropomorphic features of AI chatbots may invite users to disclose more information with these systems than they would otherwise, especially when users interact with chatbots in relationship-like ways. In this paper, we aim to develop a framework for assessing the distinctive privacy ramifications of AI agents, especially as humans begin to interact with them in relationship-like ways. In particular, we draw from prominent theories of privacy and results from human relational psychology to better understand how AI agents may affect human behavior and the flow of personal information. We then assess how these effects could bear on eight distinct values of privacy, such as autonomy, the value of forming and maintaining relationships, security from harm, and more.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7618715/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146120593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vertical Technologies and Relational Values: Rethinking Ethics of Technology in an Age of Extractivism.","authors":"Jeroen Hopster","doi":"10.1007/s13347-025-00962-w","DOIUrl":"10.1007/s13347-025-00962-w","url":null,"abstract":"<p><p>Critical reflection on the material, environmental, and social conditions underlying technology remains peripheral to the field of technology ethics. In this commentary, I underwrite the diagnosis by Vandemeulebroucke et al. (2025) that the field suffers from an \"extractivist blindspot\", but propose a somewhat different cure. First, rather than focusing on the material ontogenesis of technical artefacts, a more radical turn away from artefacts is called for, towards layered socio-technical systems as the field's core object of analysis. Second, notwithstanding the merits of their intercultural proposal, I argue that in overcoming extractivism the conceptual resources of more adjacent philosophical traditions should not be overlooked.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 3","pages":"124"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398441/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144973035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Life as <i>Terra Nullius</i>: Socially Blind Engineering in Facebook's Foundational Technologies.","authors":"João C Magalhães, Nick Couldry","doi":"10.1007/s13347-025-00971-9","DOIUrl":"10.1007/s13347-025-00971-9","url":null,"abstract":"<p><p>Critical platform scholars have long suggested, if indirectly, that social media power is somehow akin to social engineering. This article argues that the parallel is analytically productive, but for reasons that are more complex than has previously been appreciated. By examining Facebook's foundational technologies, as described in patents that sought to protect the company's early innovations, we argue that, unlike previous technocratic attempts to reconstruct society, the platform's equally consequential rendering of social reality into a legible and controllable social graph involved no substantive vision of the social world at all. Rather, the company engaged in a form of <i>socially blind engineering</i>, misrecognizing the actual social world as a <i>terra nullius</i>, as if it had no inhabitants who needed to be taken into account, and so was a domain from which profit could be extracted with relative impunity. 
In so doing, we develop a conceptual vocabulary to understand the widely-criticised recklessness that, notwithstanding some more charitable recent readings, marked the early Facebook - and that might still influence the tech sector as a whole.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 4","pages":"140"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12521260/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145309550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What Will Happen to Humanity in a Million Years? Gilbert Hottois and the Temporality of Technoscience.","authors":"Massimiliano Simons","doi":"10.1007/s13347-025-00887-4","DOIUrl":"10.1007/s13347-025-00887-4","url":null,"abstract":"<p><p>This article provides an overview of the philosophy of Gilbert Hottois, who is usually credited with popularizing the concept of technoscience. Hottois starts from a metaphilosophy of language that diagnoses twentieth-century philosophy as fixated on language at the expense of technology. As an alternative, he developed a philosophy of technoscience that reinterprets science as primarily an intervening and technical activity rather than a contemplative and theoretical one. As I will argue, Hottois articulates the nature of this technicity through a philosophy of time, reflecting on the specific temporality of technoscience as distinct from human history. This temporality of technoscience provoked the need for ethical reflection, since technoscience is constantly changing and transforming the world. This led to Hottois's engagement with bioethics, in which he sought to develop a framework capable of \"guiding\" technoscience. Aiming to avoid both total symbolic closure and total technical openness, this guidance is concerned with the preservation of diversity, especially the human capacity for ethics, ethicity. This idea of guidance was later taken up by Dutch philosophers such as Hans Achterhuis and Peter-Paul Verbeek, inspiring their empirical turn in the philosophy of technology. 
What remains missing in this framework, however, is Hottois's critical analysis of the different temporalities at work in technology and culture.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 2","pages":"58"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12041151/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.","authors":"Matthieu Queloz","doi":"10.1007/s13347-025-00864-x","DOIUrl":"10.1007/s13347-025-00864-x","url":null,"abstract":"<p><p>A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in accurately and comprehensively modelling the world is that the truth is <i>systematic</i>: true statements about the world form a whole that is not just <i>consistent</i>, in that it contains no contradictions, but <i>coherent</i>, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and coherence promise to facilitate progress towards <i>comprehensiveness</i> in an LLM's representation of the world. However, philosophers have identified compelling reasons to doubt that the truth is systematic across all domains of thought, arguing that in normative domains, in particular, the truth is largely asystematic. I argue that insofar as the truth in normative domains is asystematic, this renders it correspondingly harder for LLMs to make progress, because they cannot then leverage the systematicity of truth. 
And the less LLMs can rely on the systematicity of truth, the less we can rely on them to do our practical deliberation for us, because the very asystematicity of normative domains requires human agency to play a greater role in practical thought.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 1","pages":"34"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906541/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143650431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Three Social Dimensions of Chatbot Technology.","authors":"Mauricio Figueroa-Torres","doi":"10.1007/s13347-024-00826-9","DOIUrl":"10.1007/s13347-024-00826-9","url":null,"abstract":"<p><p>The development and deployment of chatbot technology, while spanning decades and employing different techniques, require innovative frameworks to understand and interrogate their functionality and implications. A mere technocentric account of the evolution of chatbot technology does not fully illuminate how conversational systems are embedded in societal dynamics. This study presents a structured examination of chatbots across three societal dimensions, highlighting their roles as objects of scientific research, commercial instruments, and agents of intimate interaction. Through furnishing a dimensional framework for the evolution of conversational systems - from laboratories to marketplaces to private lives- this article contributes to the wider scholarly inquiry of chatbot technology and its impact in lived human experiences and dynamics.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12234634/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144601836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}