{"title":"Induction Ain’t What It Used to Be","authors":"M. Walker, M. Ćirković","doi":"10.55613/jeet.v30i1.85","DOIUrl":"https://doi.org/10.55613/jeet.v30i1.85","url":null,"abstract":"We argue that, in all probability, the universe will become less predictable. This assertion means that induction, which some scientists conceive of as a tool for predicting the future, will become less useful. Our argument claims that the universe will increasingly come under intentional control, and objects that are under intentional control are typically less predictable than those that are not. We contrast this form of skepticism about induction, \"Skeptical-Dogmatism,\" with David Hume's Pyrrhonian skepticism about induction.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129629136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Future Political Framework for Moral Enhancement","authors":"V. Rakić, M. Ćirković","doi":"10.55613/jeet.v30i1.78","DOIUrl":"https://doi.org/10.55613/jeet.v30i1.78","url":null,"abstract":"Various kinds of human bioenhancement represent a major topic of contention in both bioethics and futures studies. Moral enhancement is one of them. It will be argued that voluntary moral bio-enhancement (and other types of moral enhancement) should be based on an opt out moral (bio)enhancement scheme. Such a scheme would avoid the challenges of a voluntary moral (bio)enhancement opt in scheme. The former has a proper place in a minimal state. It will be explained why such a state can be called Utopia. The concept of voluntary opt out moral (bio-)enhancement in Utopia will be highlighted in detail.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114425772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machines and Non-Identity Problems","authors":"Zachary Biondi","doi":"10.55613/jeet.v29i2.74","DOIUrl":"https://doi.org/10.55613/jeet.v29i2.74","url":null,"abstract":"A number of thinkers have been wondering about the moral obligations humans have, or will have, to intelligent technologies. An underlying assumption is that “moral machines” are decades in the offing, and thus we have no pressing obligations now. But, in the context of technology, we are yet to consider that we might owe moral consideration to something that is not a member of the moral community but eventually will be as an outcome of human action. Do we have current actual obligations to technologies that do not currently exist? If there are obligations to currently non-existing technologies, we must confront what might be called the Non-Identical Machines Problem. Can we harm or benefit an entity by making it one way rather than another? This paper presents the problem and argues that it is more challenging than the standard Non-Identity Problem.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124320578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do No Harm Policy for Minds in Other Substrates","authors":"Soenke Ziesche, Roman V. Yampolskiy","doi":"10.55613/jeet.v29i2.73","DOIUrl":"https://doi.org/10.55613/jeet.v29i2.73","url":null,"abstract":"Various authors have argued that in the future not only will it be technically feasible for human minds to be transferred to other substrates, but this will become, for most humans, the preferred option over the current biological limitations. It has even been claimed that such a scenario is inevitable in order to solve the challenging, but imperative, multi-agent value alignment problem. In all these considerations, it has been overlooked that, in order to create a suitable environment for a particular mind – for example, a personal universe in a computational substrate – numerous other potentially sentient beings will have to be created. These range from non-player characters to subroutines. This article analyzes the additional suffering and mind crimes that these scenarios might entail. We offer a partial solution to reduce the suffering by imposing on the transferred mind the perception of indicators to measure potential suffering in non-player characters. This approach can be seen as implementing literal empathy through enhanced cognition.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117119290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Book review: Being Ecological by Timothy Morton","authors":"Steven Umbrello","doi":"10.55613/jeet.v29i1.72","DOIUrl":"https://doi.org/10.55613/jeet.v29i1.72","url":null,"abstract":"From its opening page, Being Ecological (MIT Press, 2018; all page references to this edition) seems to situate itself as an ecological text of an unusual kind, stating that it does not aim to guilt its readers into ecological angst with weighty factoids and the information-dump approach, or “ecological information delivery mode” (p. 7), so often adopted by other authors. Timothy Morton, notorious for his ability to invert commonly held beliefs and understandings within the humanities, presents Being Ecological as his attempt to arrive at a more authentic and productive understanding of what he has called elsewhere the ecological thought and how to live with it (Morton 2012), rather than trying to guilt-trip us into ecology.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126604119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can we make wise decisions to modify ourselves?","authors":"R. Martens","doi":"10.55613/jeet.v29i1.71","DOIUrl":"https://doi.org/10.55613/jeet.v29i1.71","url":null,"abstract":"Much of the human enhancement literature focuses on the ethical, social, and political challenges we are likely to face in the future. I will focus instead on whether we can make decisions to modify ourselves that are known to be likely to satisfy our preferences. It seems plausible to suppose that, if a subject is deciding whether to select a reasonably safe and morally unproblematic enhancement, the decision will be an easy one. The subject will simply figure out her preferences and decide accordingly. The problem, however, is that there is substantial evidence that we are not very good at predicting what will satisfy our preferences. This is a general problem that applies to many different types of decisions, but I argue that there are additional complications when it comes to making decisions about enhancing ourselves. These arise not only for people interested in selecting enhancements but also for people who choose to abstain.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126282366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forever and Again","authors":"Alexey Turchin","doi":"10.55613/jeet.v28i1.70","DOIUrl":"https://doi.org/10.55613/jeet.v28i1.70","url":null,"abstract":"This article explores theoretical conditions necessary for “quantum immortality” (QI) as well as its possible practical implications. It is demonstrated that QI is a particular case of “multiverse immortality” (MI), which is based on two main assumptions: the very large size of the universe (not necessarily because of quantum effects); and a copy-friendly theory of personal identity. It is shown that a popular objection about lowering of the world-share (measure) of an observer in the case of QI does not succeed, as the world-share decline could be compensated by merging timelines for the simpler minds, and because some types of personal preferences are not dependent on such changes. Despite large uncertainty about the truth of MI, it has appreciable practical consequences for some important outcomes like suicide and aging. The article demonstrates that MI could be used to significantly increase the expected subjective probability of success of risky life extension technologies, such as cryonics, but that it makes euthanasia impractical because of the risk of eternal suffering. Euthanasia should be replaced with cryothanasia, i.e. cryopreservation after voluntary death. Another possible application of MI is as a last chance to survive a global catastrophe. MI could be considered a Plan D for reaching immortality, where Plan A consists of survival until the development of beneficial Artificial Intelligence capable of fighting aging, Plan B employs cryonics, and Plan C is digital immortality.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127453712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Value of Consciousness and Free Will in a Technological Dystopia","authors":"A. McCay","doi":"10.55613/jeet.v28i1.69","DOIUrl":"https://doi.org/10.55613/jeet.v28i1.69","url":null,"abstract":"Yuval Noah Harari warns of a very pessimistic future for our species: essentially, that it may be superseded by non-conscious Artificial Intelligence that can do anything we can and more. This assumes that we are physically instantiated algorithms that can be improved on in all respects. On such an assumption, our labor will become economically worthless once AI reaches a certain level. This picture, however, changes markedly if we accept the views of David Hodgson in respect of consciousness, free will, what he calls plausible reasoning, and the relationship among these. On Hodgson’s account, there will always be valuable skills requiring a particular kind of judgment that are possessed by humans, but not by non-conscious algorithmic machines, however advanced.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116426510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identity, Immortality, Happiness","authors":"S. Edelman","doi":"10.55613/jeet.v28i1.68","DOIUrl":"https://doi.org/10.55613/jeet.v28i1.68","url":null,"abstract":"To the extent that the performance of embodied and situated cognitive agents is predicated on fore- thought, such agents must remember, and learn from, the past to predict the future. In complex, non-stationary environments, such learning is facilitated by an intrinsic motivation to seek novelty. A significant part of an agent’s identity is thus constituted by its remembered distilled cumulative life experience, which the agent is driven to constantly expand. The combination of the drive to novelty with practical limits on memory capacity posits a problem. On the one hand, because novelty seekers are unhappy when bored, merely reliving past positive experiences soon loses its appeal: happiness can only be attained sporadically, via an open-ended pursuit of new experience. On the other hand, because the experiencer’s memory is finite, longevity and continued novelty, taken together, imply eventual loss of at least some of the stored content, and with it a disruption of the constructed identity. In this essay, I examine the biobehavioral and cognitive-computational circumstances that give rise to this problem and explore its implications for the human condition.","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122217715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Transhumanist Philosophy of Charles Sanders Peirce","authors":"Aaron B. Wilson, Daniel J. Brunson","doi":"10.55613/jeet.v27i2.67","DOIUrl":"https://doi.org/10.55613/jeet.v27i2.67","url":null,"abstract":"We explain how the work of Charles Sanders Peirce (1839–1914) – the founder of semiotics and of the pragmatist tradition in philosophy – contributes an epistemological, metaphysical, and ethical foundation to some key transhumanist ideas, including the following claims: technological cognitive enhancement is not only possible but a present reality; pursuing more sweeping cognitive enhancements is epistemically rational; and current humans should try to evolve themselves into posthumans. On Peirce’s view, the fundamental aim of inquiry is truth, understood in terms of a stage of ideal cognition (what he calls the “final opinion”). As current human cognitive abilities are insufficient to achieve this stage, Peirce’s views on cognition support a variety of ways in which they might be enhanced. Finally, we argue that what Peirce describes as our ethical summum bonum seems remarkably similar to what Bostrom (2005) argues to be the core transhumanist value: “the exploration of the posthuman realm.”","PeriodicalId":157018,"journal":{"name":"Journal of Ethics and Emerging Technologies","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116841965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}