{"title":"What Will Happen to Humanity in a Million Years? Gilbert Hottois and the Temporality of Technoscience.","authors":"Massimiliano Simons","doi":"10.1007/s13347-025-00887-4","DOIUrl":"https://doi.org/10.1007/s13347-025-00887-4","url":null,"abstract":"<p><p>This article provides an overview of the philosophy of Gilbert Hottois, who is usually credited with popularizing the concept of technoscience. Hottois starts from a metaphilosophy of language that diagnoses twentieth-century philosophy as fixated on language at the expense of technology. As an alternative, he developed a philosophy of technoscience that reinterprets science as primarily an intervening and technical activity rather than a contemplative and theoretical one. As I will argue, Hottois articulates the nature of this technicity through a philosophy of time, reflecting on the specific temporality of technoscience as distinct from human history. This temporality of technoscience provoked the need for ethical reflection, since technoscience is constantly changing and transforming the world. This led to Hottois's engagement with bioethics, in which he sought to develop a framework capable of \"guiding\" technoscience. Aiming to avoid both total symbolic closure and total technical openness, this guidance is concerned with the preservation of diversity, especially the human capacity for ethics, ethicity. This idea of guidance was later taken up by Dutch philosophers such as Hans Achterhuis and Peter-Paul Verbeek, inspiring their empirical turn in the philosophy of technology. What remains missing in this framework, however, is Hottois's critical analysis of the different temporalities at work in technology and culture.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 2","pages":"58"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12041151/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.","authors":"Matthieu Queloz","doi":"10.1007/s13347-025-00864-x","DOIUrl":"https://doi.org/10.1007/s13347-025-00864-x","url":null,"abstract":"<p><p>A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in accurately and comprehensively modelling the world is that the truth is <i>systematic</i>: true statements about the world form a whole that is not just <i>consistent</i>, in that it contains no contradictions, but <i>coherent</i>, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and coherence promise to facilitate progress towards <i>comprehensiveness</i> in an LLM's representation of the world. However, philosophers have identified compelling reasons to doubt that the truth is systematic across all domains of thought, arguing that in normative domains, in particular, the truth is largely asystematic. I argue that insofar as the truth in normative domains is asystematic, this renders it correspondingly harder for LLMs to make progress, because they cannot then leverage the systematicity of truth. And the less LLMs can rely on the systematicity of truth, the less we can rely on them to do our practical deliberation for us, because the very asystematicity of normative domains requires human agency to play a greater role in practical thought.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 1","pages":"34"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906541/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143650431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What is the Point of Social Media? Corporate Purpose and Digital Democratization.","authors":"Ugur Aytac","doi":"10.1007/s13347-025-00855-y","DOIUrl":"10.1007/s13347-025-00855-y","url":null,"abstract":"<p><p>This paper proposes a new normative framework to think about Big Tech reform. Focusing on the case of digital communication, I argue that rethinking the corporate purpose of social media companies is a distinctive entry point to the debate on how to render the powers of tech corporations democratically legitimate. I contend that we need to strive for a reform that redefines the corporate purpose of social media companies. In this view, their purpose should be to create and maintain a free, egalitarian, and democratic public sphere rather than profit seeking. This political reform democratically contains corporate power in two ways: first, the legally enforceable fiduciary duties of corporate boards are reconceptualized in relation to democratic purposes rather than shareholder interests. Second, corporate governance structures should be redesigned to ensure that the abstract purpose is realized through representatives whose incentives align with the existence of a democratic public sphere. My argument complements radical proposals such as platform socialism by drawing a connection between democratizing social media governance and identifying the proper purpose of social media companies.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 1","pages":"26"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11842518/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143484260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Emotion Detection, Privacy, and the Law.","authors":"Leonhard Menges, Eva Weber-Guskar","doi":"10.1007/s13347-025-00895-4","DOIUrl":"10.1007/s13347-025-00895-4","url":null,"abstract":"<p><p>Intuitively, it seems reasonable to prefer that not everyone knows about all our emotions, for example, who we are in love with, who we are angry with, and what we are ashamed of. Moreover, prominent examples in the philosophical discussion of privacy include emotions. Finally, empirical studies show that a significant number of people in the UK and US are uncomfortable with digital emotion detection. In light of this, it may be surprising to learn that current data protection laws in Europe, which are designed to protect privacy, do not specifically address data about emotions. Understanding and discussing this incongruity is the subject of this paper. We will argue for two main claims: first, that anonymous emotion data does not need special legal protection, and second, that there are very good moral reasons to provide non-anonymous emotion data with special legal protection.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 2","pages":"77"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106471/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Designer of A Robot Determines Its Position Within The Moral Circle.","authors":"Kamil Mamak","doi":"10.1007/s13347-025-00898-1","DOIUrl":"10.1007/s13347-025-00898-1","url":null,"abstract":"","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"38 2","pages":"66"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12081538/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144095332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Where Technology Leads, the Problems Follow. Technosolutionism and the Dutch Contact Tracing App.","authors":"Lotje E Siffels, Tamar Sharon","doi":"10.1007/s13347-024-00807-y","DOIUrl":"10.1007/s13347-024-00807-y","url":null,"abstract":"<p><p>In April 2020, in the midst of its first pandemic lockdown, the Dutch government announced plans to develop a contact tracing app to help contain the spread of the coronavirus - the <i>Coronamelder.</i> Originally intended to address the problem of the overburdening of manual contract tracers, by the time the app was released six months later, the problem it sought to solve had drastically changed, without the solution undergoing any modification, making it a prime example of technosolutionism. While numerous critics have mobilised the concept of technosolutionism, the questions of how technosolutionism works in practice and which specific harms it can provoke have been understudied. In this paper we advance a thick conception of technosolutionism which, drawing on Evgeny Morozov, distinguishes it from the notion of technological fix, and, drawing on constructivism, emphasizes its constructivist dimension. Using this concept, we closely follow the problem that the Coronamelder aimed to solve and how it shifted over time to fit the Coronamelder solution, rather than the other way around. We argue that, although problems are always constructed, technosolutionist problems are <i>badly</i> constructed, insofar as the careful and cautious deliberation which should accompany problem construction in public policy is absent in the case of technosolutionism. This can lead to three harms: a subversion of democratic decision-making; the presence of powerful new actors in the public policy context - here Big Tech; and the creation of \"orphan problems\", whereby the initial problems that triggered the need to develop a (techno)solution are left behind. We question whether the most popular form of technology ethics today, which focuses predominantly on the <i>design</i> of technology, is well-equipped to address these technosolutionist harms, insofar as such a focus may preclude critical thinking about whether or not technology should be the solution in the first place.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 4","pages":"125"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11519188/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Track Thyself? The Value and Ethics of Self-knowledge Through Technology.","authors":"Muriel Leuenberger","doi":"10.1007/s13347-024-00704-4","DOIUrl":"10.1007/s13347-024-00704-4","url":null,"abstract":"<p><p>Novel technological devices, applications, and algorithms can provide us with a vast amount of personal information about ourselves. Given that we have ethical and practical reasons to pursue self-knowledge, should we use technology to increase our self-knowledge? And which ethical issues arise from the pursuit of technologically sourced self-knowledge? In this paper, I explore these questions in relation to bioinformation technologies (health and activity trackers, DTC genetic testing, and DTC neurotechnologies) and algorithmic profiling used for recommender systems, targeted advertising, and technologically supported decision-making. First, I distinguish between impersonal, critical, and relational self-knowledge. Relational self-knowledge is a so far neglected dimension of self-knowledge which is introduced in this paper. Next, I investigate the contribution of these technologies to the three types of self-knowledge and uncover the connected ethical concerns. Technology can provide a lot of impersonal self-knowledge, but we should focus on the quality of the information which tends to be particularly insufficient for marginalized groups. In terms of critical self-knowledge, the nature of technologically sourced personal information typically impedes critical engagement. The value of relational self-knowledge speaks in favour of transparency of information technology, notably for algorithms that are involved in decision-making about individuals. Moreover, bioinformation technologies and digital profiling shape the concepts and norms that define us. We should ensure they not only serve commercial interests but our identity and self-knowledge interests.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10821817/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139576841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Moderating Synthetic Content: the Challenge of Generative AI.","authors":"Sarah A Fisher, Jeffrey W Howard, Beatriz Kira","doi":"10.1007/s13347-024-00818-9","DOIUrl":"https://doi.org/10.1007/s13347-024-00818-9","url":null,"abstract":"<p><p>Artificially generated content threatens to seriously disrupt the public sphere. Generative AI massively facilitates the production of convincing portrayals of fabricated events. We have already begun to witness the spread of synthetic misinformation, political propaganda, and non-consensual intimate deepfakes. Malicious uses of the new technologies can only be expected to proliferate over time. In the face of this threat, social media platforms must surely act. But how? While it is tempting to think they need new sui generis policies targeting synthetic content, we argue that the challenge posed by generative AI should be met through the enforcement of general platform rules. We demonstrate that the threat posed to individuals and society by AI-generated content is no different in kind from that of ordinary harmful content-a threat which is already well recognised. Generative AI massively increases the problem but, ultimately, it requires the same approach. Therefore, platforms do best to double down on improving and enforcing their existing rules, regardless of whether the content they are dealing with was produced by humans or machines.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 4","pages":"133"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11561028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Incalculability of the Generated Text.","authors":"Alžbeta Kuchtová","doi":"10.1007/s13347-024-00708-0","DOIUrl":"10.1007/s13347-024-00708-0","url":null,"abstract":"<p><p>In this paper, I explore Derrida's concept of exteriorization in relation to texts generated by machine learning. I first discuss Heidegger's view of machine creation and then present Derrida's criticism of Heidegger. I explain the concept of iterability, which is the central notion on which Derrida's criticism is based. The thesis defended in the paper is that Derrida's account of iterability provides a helpful framework for understanding the phenomenon of machine learning-generated literature. His account of textuality highlights the incalculability and mechanical elements characteristic of all texts, including machine-generated texts. By applying Derrida's concept to the phenomenon of machine creation, we can deconstruct the distinction between human and non-human creation. As I propose in the conclusion to this paper, this provides a basis on which to consider potential positive uses of machine learning.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 1","pages":"25"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10874339/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139906570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Authorship and ChatGPT: a Conservative View.","authors":"René van Woudenberg, Chris Ranalli, Daniel Bracker","doi":"10.1007/s13347-024-00715-1","DOIUrl":"10.1007/s13347-024-00715-1","url":null,"abstract":"<p><p>Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it produces. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT's authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.</p>","PeriodicalId":39065,"journal":{"name":"Philosophy and Technology","volume":"37 1","pages":"34"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10896910/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139991438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}