Akratic robots and the computational logic thereof

S. Bringsjord, Naveen Sundar G., Daniel P. Thero, Mei Si
{"title":"Akratic机器人及其计算逻辑","authors":"S. Bringsjord, Naveen Sundar G., Daniel P. Thero, Mei Si","doi":"10.1109/ETHICS.2014.6893436","DOIUrl":null,"url":null,"abstract":"Alas, there are akratic persons. We know this from the human case, and our knowledge is nothing new, since for instance Plato analyzed rather long ago a phenomenon all human persons, at one point or another, experience: (1) Jones knows that he ought not to - say - drink to the point of passing out, (2) earnestly desires that he not imbibe to this point, but (3) nonetheless (in the pleasant, seductive company of his fun and hard-drinking buddies) slips into a series of decisions to have highball upon highball, until collapse.1 Now; could a robot suffer from akrasia? Thankfully, no: only persons can be plagued by this disease (since only persons can have full-blown P-consciousness2, and robots can't be persons (Bringsjord 1992). But could a robot be afflicted by a purely - to follow Pollock (1995) - “intellectual” version of akrasia? Yes, and for robots collaborating with American human soldiers, even this version, in warfare, isn't a savory prospect: A robot that knows it ought not to torture or execute enemy prisoners in order to exact revenge, desires to refrain from firing upon them, but nonetheless slips into a decision to ruthlessly do so - well, this is probably not the kind of robot the U.S. military is keen on deploying. 
Unfortunately, for reasons explained below, unless the engineering we recommend is supported and deployed, this might well be the kind of robot that our future holds.","PeriodicalId":101738,"journal":{"name":"2014 IEEE International Symposium on Ethics in Science, Technology and Engineering","volume":"130 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":"{\"title\":\"Akratic robots and the computational logic thereof\",\"authors\":\"S. Bringsjord, Naveen Sundar G., Daniel P. Thero, Mei Si\",\"doi\":\"10.1109/ETHICS.2014.6893436\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Alas, there are akratic persons. We know this from the human case, and our knowledge is nothing new, since for instance Plato analyzed rather long ago a phenomenon all human persons, at one point or another, experience: (1) Jones knows that he ought not to - say - drink to the point of passing out, (2) earnestly desires that he not imbibe to this point, but (3) nonetheless (in the pleasant, seductive company of his fun and hard-drinking buddies) slips into a series of decisions to have highball upon highball, until collapse.1 Now; could a robot suffer from akrasia? Thankfully, no: only persons can be plagued by this disease (since only persons can have full-blown P-consciousness2, and robots can't be persons (Bringsjord 1992). But could a robot be afflicted by a purely - to follow Pollock (1995) - “intellectual” version of akrasia? Yes, and for robots collaborating with American human soldiers, even this version, in warfare, isn't a savory prospect: A robot that knows it ought not to torture or execute enemy prisoners in order to exact revenge, desires to refrain from firing upon them, but nonetheless slips into a decision to ruthlessly do so - well, this is probably not the kind of robot the U.S. military is keen on deploying. 
Unfortunately, for reasons explained below, unless the engineering we recommend is supported and deployed, this might well be the kind of robot that our future holds.\",\"PeriodicalId\":101738,\"journal\":{\"name\":\"2014 IEEE International Symposium on Ethics in Science, Technology and Engineering\",\"volume\":\"130 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-05-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"28\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE International Symposium on Ethics in Science, Technology and Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ETHICS.2014.6893436\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE International Symposium on Ethics in Science, Technology and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ETHICS.2014.6893436","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 28

Abstract

Alas, there are akratic persons. We know this from the human case, and our knowledge is nothing new, since for instance Plato analyzed rather long ago a phenomenon all human persons, at one point or another, experience: (1) Jones knows that he ought not to - say - drink to the point of passing out, (2) earnestly desires that he not imbibe to this point, but (3) nonetheless (in the pleasant, seductive company of his fun and hard-drinking buddies) slips into a series of decisions to have highball upon highball, until collapse.¹ Now: could a robot suffer from akrasia? Thankfully, no: only persons can be plagued by this disease (since only persons can have full-blown P-consciousness², and robots can't be persons (Bringsjord 1992)). But could a robot be afflicted by a purely - to follow Pollock (1995) - "intellectual" version of akrasia? Yes, and for robots collaborating with American human soldiers, even this version, in warfare, isn't a savory prospect: a robot that knows it ought not to torture or execute enemy prisoners in order to exact revenge, desires to refrain from firing upon them, but nonetheless slips into a decision to ruthlessly do so - well, this is probably not the kind of robot the U.S. military is keen on deploying.

Unfortunately, for reasons explained below, unless the engineering we recommend is supported and deployed, this might well be the kind of robot that our future holds.
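The three-part structure the abstract attributes to Jones - the agent (1) knows it ought not perform an action, (2) desires not to perform it, yet (3) decides to perform it anyway - can be read as a simple consistency check over an agent's attitudes. The sketch below is purely illustrative and not taken from the paper; the attribute names and the set-based representation of beliefs, desires, and decisions are our own assumptions, standing in for whatever formal operators a full computational logic of akrasia would use.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """A toy snapshot of an agent's attitudes toward named actions.

    These three sets are an illustrative stand-in (an assumption of this
    sketch) for epistemic, desiderative, and decision operators.
    """
    believes_forbidden: set = field(default_factory=set)  # actions the agent knows it ought not do
    desires_to_avoid: set = field(default_factory=set)    # actions the agent wants to refrain from
    decided: set = field(default_factory=set)             # actions the agent has resolved to do

def is_akratic(state: AgentState, action: str) -> bool:
    """'Intellectual' akrasia with respect to `action`: the agent knows it
    ought not do it, desires not to do it, and yet decides to do it."""
    return (action in state.believes_forbidden
            and action in state.desires_to_avoid
            and action in state.decided)

# Jones's case from the abstract, encoded in this toy form:
jones = AgentState(
    believes_forbidden={"drink_to_collapse"},
    desires_to_avoid={"drink_to_collapse"},
    decided={"drink_to_collapse"},   # the weak-willed slip
)
```

On this reading, a verification layer of the kind the authors recommend would refuse to let a decision enter `decided` while the same action sits in `believes_forbidden` - that is, it would make `is_akratic` unsatisfiable by construction rather than merely detectable after the fact.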