Journal of Ethics and Emerging Technologies: Latest Articles

The Concept of the Posthuman
Journal of Ethics and Emerging Technologies, Pub Date: 2016-07-01, DOI: 10.55613/jeet.v26i2.57
D. J. Wennemann
Abstract: A central task in understanding the theme of the posthuman involves relating it to the concept of the human. For some, there is continuity between the concepts of the human and the posthuman. This approach can be understood in the tradition of the great chain of being. Another approach posits a conceptual, and perhaps ontological, saltus (μετάβασις εἰς ἄλλο γένος). Here, the concept of the posthuman is taken to represent a radical departure from the realm of the human. After considering Lovejoy’s scheme of the great chain of being, Aristotle’s view of a conceptual saltus (μετάβασις εἰς ἄλλο γένος), and their historical significance, I will suggest how we might distinguish various concepts of the posthuman from the human by applying Rudolf Carnap’s approach to defining multiple concepts of space. We can thus create a linguistic convention that will assist in constructing useful conceptions of the human and posthuman – these can clarify the prospects of a posthuman future.
Citations: 0
The Ethics of Exponential Life Extension through Brain Preservation
Journal of Ethics and Emerging Technologies, Pub Date: 2016-03-01, DOI: 10.55613/jeet.v26i1.54
M. Cerullo
Abstract: Chemical brain preservation allows the brain to be preserved for millennia. In the coming decades, the information in a chemically preserved brain may be able to be decoded and emulated in a computer. I first examine the history of brain preservation and recent advances that indicate this may soon be a real possibility. I then argue that chemical brain preservation should be viewed as a life-saving medical procedure. Any technology that significantly extends the human life span faces many potential criticisms. However, standard medical ethics entails that individuals should have the autonomy to choose chemical brain preservation. Only if the harm to society caused by brain preservation and future emulation greatly outweighed any potential benefit would it be ethically acceptable to refuse individuals this medical intervention. Since no such harm exists, it is ethical for individuals to choose chemical brain preservation.
Citations: 6
The Rise of Social Robots
Journal of Ethics and Emerging Technologies, Pub Date: 2016-03-01, DOI: 10.55613/jeet.v26i1.55
Riccardo Campa
Abstract: In this article I explore the most recent literature on social robotics and argue that the field of robotics is evolving in a direction that will soon require a systematic collaboration between engineers and sociologists. After discussing several problems relating to social robotics, I emphasize that two key concepts in this research area are scenario and persona. These are already popular as design tools in Human-Computer Interaction (HCI), and an approach based on them is now being adopted in Human-Robot Interaction (HRI). As robots become more and more sophisticated, engineers will need the help of trained sociologists and psychologists in order to create personas and scenarios and to “teach” humanoids how to behave in various circumstances.
Citations: 3
Don’t Worry about Superintelligence
Journal of Ethics and Emerging Technologies, Pub Date: 2016-02-01, DOI: 10.55613/jeet.v26i1.52
N. Agar
Abstract: This paper responds to Nick Bostrom’s suggestion that the threat of a human-unfriendly superintelligence should lead us to delay or rethink progress in AI. I allow that progress in AI presents problems that we are currently unable to solve. However, we should distinguish between currently unsolved problems for which there are rational expectations of solutions and currently unsolved problems for which no such expectation is appropriate. The problem of a human-unfriendly superintelligence belongs to the first category. It is rational to proceed on the assumption that we will solve it. These observations do not reduce to zero the existential threat from superintelligence. But we should not permit fear of very improbable negative outcomes to delay the arrival of the expected benefits from AI.
Citations: 5
Infusing Advanced AGIs with Human-Like Value Systems
Journal of Ethics and Emerging Technologies, Pub Date: 2016-02-01, DOI: 10.55613/jeet.v26i1.51
B. Goertzel
Abstract: Two theses are proposed, regarding the future evolution of the value systems of advanced AGI systems. The Value Learning Thesis is a semi-formalized version of the idea that, if an AGI system is taught human values in an interactive and experiential way as its intelligence increases toward human level, it will likely adopt these human values in a genuine way. The Value Evolution Thesis is a semi-formalized version of the idea that if an AGI system begins with human-like values, and then iteratively modifies itself, it will end up in roughly the same future states as a population of human beings engaged with progressively increasing their own intelligence (e.g. by cyborgification or brain modification). Taken together, these theses suggest a worldview in which raising young AGIs to have human-like values is a sensible thing to do, and likely to produce a future that is generally desirable in a human sense.

While these two theses are far from definitively proven, I argue that they are more solid and more relevant to the actual future of AGI than Bostrom’s “Instrumental Convergence Thesis” and “Orthogonality Thesis,” which are core to the basis of his argument (in his book Superintelligence) for fearing ongoing AGI development and placing AGI R&D under strict governmental control.

In the context of fleshing out this argument, previous publications and discussions by Richard Loosemore and Kaj Sotala are discussed in some detail.
Citations: 14
Geoengineering
Journal of Ethics and Emerging Technologies, Pub Date: 2016-02-01, DOI: 10.55613/jeet.v26i1.50
A. Lockley
Abstract: Geoengineering, specifically Solar Radiation Management (SRM), has been proposed to effect rapid influence over the Earth’s climate system in order to counteract Anthropogenic Global Warming. This poses near-term to long-term governance challenges, some of which are within the planning horizon of current political administrations. Previous discussions of governance of SRM (in both academic and general literature) have focused primarily on two scenarios: an isolated “Greenfinger” individual, or state, acting independently (perhaps in defiance of international opinion); versus more consensual, internationalist approaches. I argue that these models represent a very limited sub-set of plausible deployment scenarios. To generate a range of alternative models, I offer a short, relatively unstructured discussion of a range of different types of warfare – each with an analogous SRM deployment regime.
Citations: 1
The Stoic Sage 3.0
Journal of Ethics and Emerging Technologies, Pub Date: 2016-02-01, DOI: 10.55613/jeet.v26i1.53
S. Sorgner
Abstract: I propose to show that any direct moral bioenhancement procedures that could be realized within a relatively short period of time are not realistic options. This does not have to worry us, however, because alternative options for promoting morality are available. Consequently, moral bioenhancement is not an option for dealing successfully with the increased potential destructiveness of contemporary technologies within a short-term framework, i.e. within this century. In what follows, I will explain why this is the case, and why, contrary to Ingmar Persson and Julian Savulescu, I think this need not worry us too much. In section 1, I will critically analyze moral bioenhancement by means of citalopram, aimed at reducing the tendency to harm others directly. I give this prominence because it could be a practical option for altering human emotions and dispositions – to my mind, it seems the most promising practical option in this field. In later sections, I will consider other means of realizing moral bioenhancement, since the first option does not do the job it is supposed to, and because Persson and Savulescu are concerned with alternative approaches.
Citations: 0
Human Connectome Mapping and Monitoring Using Neuronanorobots
Journal of Ethics and Emerging Technologies, Pub Date: 2016-01-01, DOI: 10.55613/jeet.v26i1.49
Nuno R. B. Martins, W. Erlhagen
Abstract: Neuronanorobotics is the application of medical nanorobots to the human brain. This paper proposes three specific classes of neuronanorobots, named endoneurobots, gliabots and synaptobots, which together can non-destructively map and monitor the structural changes occurring on the 86 × 10⁹ neurons and the 2.42 × 10¹⁴ synapses in the human brain, while also recording the synaptic-processed 4.31 × 10¹⁵ spikes/sec carrying electrical functional information processed in the neuronal and synaptic network.
Citations: 4
Superintelligence: Fears, Promises and Potentials
Journal of Ethics and Emerging Technologies, Pub Date: 2015-12-01, DOI: 10.55613/jeet.v25i2.48
B. Goertzel
Abstract: Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute (formerly Singularity Institute for AI), and David Weinbaum (Weaver) and Viktoras Veitas of the Global Brain Institute.

Relevant portions of Yudkowsky’s book Rationality: From AI to Zombies are briefly reviewed, and it is found that nearly all the core ideas of Bostrom’s work appeared previously or concurrently in Yudkowsky’s thinking. However, Yudkowsky often presents these shared ideas in a more plain-spoken and extreme form, making clearer the essence of what is being claimed. For instance, the elitist strain of thinking that one sees in the background in Bostrom is plainly and openly articulated in Yudkowsky, with many of the same practical conclusions (e.g. that it may well be best if advanced AI is developed in secret by a small elite group).

Bostrom and Yudkowsky view intelligent systems through the lens of reinforcement learning – they view them as “reward-maximizers” and worry about what happens when a very powerful and intelligent reward-maximizer is paired with a goal system that gives rewards for achieving foolish goals like tiling the universe with paperclips.

Weinbaum and Veitas’s recent paper “Open-Ended Intelligence” presents a starkly alternative perspective on intelligence, viewing it as centered not on reward maximization, but rather on complex self-organization and self-transcending development that occurs in close coupling with a complex environment that is also ongoingly self-organizing, in only partially knowable ways.

It is concluded that Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way. For instance, formal arguments whose implication is that the “worst case scenarios” for advanced AI development are extremely dire, are often informally discussed as if they demonstrated the likelihood, rather than just the possibility, of highly negative outcomes. And potential dangers of reward-maximizing AI are taken as problems with AI in general, rather than just as problems of the reward-maximization paradigm as an approach to building superintelligence. If one views past, current, and future intelligence as “open-ended,” in the vernacular of Weaver and Veitas, the potential dangers no longer appear to loom so large, and one sees a future that is wide-open, complex and uncertain, just as it has always been.
Citations: 14
Reframing Ethical Theory, Pedagogy, and Legislation to Bias Open Source AGI Towards Friendliness and Wisdom
Journal of Ethics and Emerging Technologies, Pub Date: 2015-11-01, DOI: 10.55613/jeet.v25i2.47
J. Cox
Abstract: Hopes for biasing the odds towards the development of AGI that is human-friendly depend on finding and employing ethical theories and practices that can be incorporated successfully in the construction, programming and/or developmental growth, education and mature life world of future AGI. Mainstream ethical theories are ill-adapted for this purpose because of their mono-logical decision procedures which aim at “Golden rule” style principles and judgments which are objective in the sense of being universal and absolute. A much more helpful framework for ethics is provided by a dialogical approach using conflict resolution and negotiation methods, a “Rainbow rule” approach to diversity, and a notion of objectivity as emergent impartiality. This conflict resolution approach will also improve our chances in dealing with two other problems related to the “Friendly AI” problem: the difficulty of programming AI to be not merely smarter but genuinely wiser, and the dilemmas that arise in considering whether AGIs will be Friendly to humans out of mere partisanship or out of genuine intent to promote the Good.

While these issues are challenging, a strategy for pursuing and promoting research on them can be articulated, and basic legislation and corporate policies can be adopted to encourage their development as part of the project of biasing the odds in favor of Friendly and Wise AGI.
Citations: 0