Philosophy & Technology | Pub Date: 2021-01-01 | Epub Date: 2020-05-15 | DOI: 10.1007/s13347-020-00401-y
Michael Klenk
{"title":"How Do Technological Artefacts Embody Moral Values?","authors":"Michael Klenk","doi":"10.1007/s13347-020-00401-y","DOIUrl":"https://doi.org/10.1007/s13347-020-00401-y","url":null,"abstract":"<p><p>According to some philosophers of technology, technology embodies moral values in virtue of its functional properties and the intentions of its designers. But this paper shows that such an account makes the values supposedly embedded in technology epistemically opaque and that it does not allow for values to change. Therefore, to overcome these shortcomings, the paper introduces the novel Affordance Account of Value Embedding as a superior alternative. Accordingly, artefacts bear affordances, that is, artefacts make certain actions likelier given the circumstances. Based on an interdisciplinary perspective that invokes recent moral anthropology, I conceptualize affordances as response-dependent properties. That is, they depend on intrinsic as well as extrinsic properties of the artefact. We have reason to value these properties. Therefore, artefacts embody values and are not value-neutral, which has practical implications for the design of new technologies.</p>","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":"34 3","pages":"525-544"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13347-020-00401-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39846712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philosophy & Technology | Pub Date: 2021-01-01 | Epub Date: 2021-03-21 | DOI: 10.1007/s13347-021-00445-8
Maurice D Mulvenna, Raymond Bond, Jack Delaney, Fatema Mustansir Dawoodbhoy, Jennifer Boger, Courtney Potts, Robin Turkington
{"title":"Ethical Issues in Democratizing Digital Phenotypes and Machine Learning in the Next Generation of Digital Health Technologies.","authors":"Maurice D Mulvenna, Raymond Bond, Jack Delaney, Fatema Mustansir Dawoodbhoy, Jennifer Boger, Courtney Potts, Robin Turkington","doi":"10.1007/s13347-021-00445-8","DOIUrl":"https://doi.org/10.1007/s13347-021-00445-8","url":null,"abstract":"<p><p>Digital phenotyping is the term given to the capturing and use of user log data from health and wellbeing technologies used in apps and cloud-based services. This paper explores ethical issues in making use of digital phenotype data in the arena of digital health interventions. Products and services based on digital wellbeing technologies typically include mobile device apps as well as browser-based apps to a lesser extent, and can include telephony-based services, text-based chatbots, and voice-activated chatbots. Many of these digital products and services are simultaneously available across many channels in order to maximize availability for users. Digital wellbeing technologies offer useful methods for real-time data capture of the interactions of users with the products and services. It is possible to design what data are recorded, how and where it may be stored, and, crucially, how it can be analyzed to reveal individual or collective usage patterns. The paper also examines digital phenotyping workflows, before enumerating the ethical concerns pertaining to different types of digital phenotype data, highlighting ethical considerations for collection, storage, and use of the data. A case study of a digital health app is used to illustrate the ethical issues. The case study explores the issues from a perspective of data prospecting and subsequent machine learning. The ethical use of machine learning and artificial intelligence on digital phenotype data and the broader issues in democratizing machine learning and artificial intelligence for digital phenotype data are then explored in detail.</p>","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":"34 4","pages":"1945-1960"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13347-021-00445-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25525517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philosophy & Technology | Pub Date: 2021-01-01 | Epub Date: 2021-03-08 | DOI: 10.1007/s13347-021-00446-7
Luciano Floridi
{"title":"Trump, Parler, and Regulating the Infosphere as Our Commons.","authors":"Luciano Floridi","doi":"10.1007/s13347-021-00446-7","DOIUrl":"https://doi.org/10.1007/s13347-021-00446-7","url":null,"abstract":"","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":"34 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13347-021-00446-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25487256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philosophy & Technology | Pub Date: 2021-01-01 | Epub Date: 2020-01-22 | DOI: 10.1007/s13347-019-00389-0
Hannah Maslen, Stephen Rainey
{"title":"Control and Ownership of Neuroprosthetic Speech.","authors":"Hannah Maslen, Stephen Rainey","doi":"10.1007/s13347-019-00389-0","DOIUrl":"10.1007/s13347-019-00389-0","url":null,"abstract":"<p><p>Implantable brain-computer interfaces (BCIs) are being developed to restore speech capacity for those who are unable to speak. Patients with locked-in syndrome or amyotrophic lateral sclerosis could be able to use covert speech - vividly imagining saying something without actual vocalisation - to trigger neural controlled systems capable of synthesising speech. User control has been identified as particularly pressing for this type of BCI. The incorporation of machine learning and statistical language models into the decoding process introduces a contribution to (or 'shaping of') the output that is beyond the user's control. Whilst this type of 'shared control' of BCI action is not unique to speech BCIs, the automated shaping of what a user 'says' has a particularly acute ethical dimension, which may differ from parallel concerns surrounding automation in movement BCIs. This paper provides an analysis of the control afforded to the user of a speech BCI of the sort under development, as well as the relationships between <i>accuracy</i>, <i>control,</i> and the user's <i>ownership</i> of the speech produced. Through comparing speech BCIs with BCIs for movement, we argue that, whilst goal selection is the more significant locus of control for the user of a movement BCI, control over process will be more significant for the user of the speech BCI. The design of the speech BCI may therefore have to trade off some possible efficiency gains afforded by automation in order to preserve sufficient guidance control necessary for users to express themselves in ways they prefer. We consider the implications for the speech BCI user's <i>responsibility for</i> produced outputs and their <i>ownership of</i> token outputs. We argue that these are distinct assessments. Ownership of synthetic speech concerns whether the content of the output sufficiently represents the user, rather than their morally relevant, causal role in producing that output.</p>","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":"34 3","pages":"425-445"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8550345/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39832694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philosophy & Technology | Pub Date: 2021-01-01 | Epub Date: 2021-02-11 | DOI: 10.1007/s13347-021-00442-x
Elizabeth Stewart
{"title":"Detecting Fake News: Two Problems for Content Moderation.","authors":"Elizabeth Stewart","doi":"10.1007/s13347-021-00442-x","DOIUrl":"https://doi.org/10.1007/s13347-021-00442-x","url":null,"abstract":"<p><p>The spread of fake news online has far reaching implications for the lives of people offline. There is increasing pressure for content sharing platforms to intervene and mitigate the spread of fake news, but intervention spawns accusations of biased censorship. The tension between fair moderation and censorship highlights two related problems that arise in flagging online content as fake or legitimate: firstly, what kind of content counts as a problem such that it should be flagged, and secondly, is it practically and theoretically possible to gather and label instances of such content in an unbiased manner? In this paper, I argue that answering either question involves making value judgements that can generate user distrust toward fact checking efforts.</p>","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":"34 4","pages":"923-940"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13347-021-00442-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25371713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philosophy & Technology | Pub Date: 2020-05-15 | DOI: 10.1007/s13347-020-00402-x
Seán S. ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng, Zhe Liu
{"title":"Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance","authors":"Seán S. ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng, Zhe Liu","doi":"10.1007/s13347-020-00402-x","DOIUrl":"https://doi.org/10.1007/s13347-020-00402-x","url":null,"abstract":"","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":"28 23","pages":"571 - 593"},"PeriodicalIF":0.0,"publicationDate":"2020-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141204988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philosophy & Technology | Pub Date: 2020-04-28 | DOI: 10.1007/s13347-020-00400-z
Bongani Andy Mabaso
{"title":"Artificial Moral Agents Within an Ethos of AI4SG","authors":"Bongani Andy Mabaso","doi":"10.1007/s13347-020-00400-z","DOIUrl":"https://doi.org/10.1007/s13347-020-00400-z","url":null,"abstract":"","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":"52 6","pages":"7 - 21"},"PeriodicalIF":0.0,"publicationDate":"2020-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141209761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philosophy & Technology | Pub Date: 2020-03-23 | DOI: 10.1007/s13347-020-00398-4
Alberto Romele
{"title":"Technological Capital: Bourdieu, Postphenomenology, and the Philosophy of Technology Beyond the Empirical Turn","authors":"Alberto Romele","doi":"10.1007/s13347-020-00398-4","DOIUrl":"https://doi.org/10.1007/s13347-020-00398-4","url":null,"abstract":"","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":" 20","pages":"483 - 505"},"PeriodicalIF":0.0,"publicationDate":"2020-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141221268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philosophy & Technology | Pub Date: 2020-03-02 | DOI: 10.1007/s13347-020-00395-7
Ş. Tekin
{"title":"Is Big Data the New Stethoscope? Perils of Digital Phenotyping to Address Mental Illness","authors":"Ş. Tekin","doi":"10.1007/s13347-020-00395-7","DOIUrl":"https://doi.org/10.1007/s13347-020-00395-7","url":null,"abstract":"","PeriodicalId":513391,"journal":{"name":"Philosophy & Technology","volume":"77 s340","pages":"447 - 461"},"PeriodicalIF":0.0,"publicationDate":"2020-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141225409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
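Each record above is a single JSON object whose abstract field carries leftover HTML markup (`<p>`, `<i>` tags). As a minimal sketch of how these records could be consumed, the following parses one such line and strips the markup. The `record_line` here is a shortened, hypothetical sample in the same shape as the records above (real abstracts are longer); the field names (`title`, `doi`, `abstract`, `journal`, `publicationDate`) are taken from the records themselves.

```python
import json
import re

# Shortened sample record in the same shape as the listing above
# (the real abstract is much longer; this is illustrative only).
record_line = (
    '{"title":"Detecting Fake News: Two Problems for Content Moderation.",'
    '"doi":"10.1007/s13347-021-00442-x",'
    '"abstract":"<p><p>The spread of fake news online has far reaching implications.</p>",'
    '"journal":{"name":"Philosophy & Technology","volume":"34 4","pages":"923-940"},'
    '"publicationDate":"2021-01-01"}'
)

def parse_record(line: str) -> dict:
    """Parse one JSON record and strip residual HTML tags from the abstract."""
    rec = json.loads(line)
    abstract = rec.get("abstract") or ""
    # Remove simple HTML tags such as <p> and <i> left over from extraction.
    rec["abstract"] = re.sub(r"<[^>]+>", "", abstract).strip()
    return rec

rec = parse_record(record_line)
print(rec["title"])
print(rec["doi"])
print(rec["abstract"])
```

Note that this naive tag-stripping regex is adequate for the flat tags seen in these records, but a full HTML parser would be safer for arbitrary markup.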