Lethal Autonomous Weapons: Latest Publications

Empirical Data on Attitudes Toward Autonomous Systems
Jai C. Galliott, Bianca Baggiarini, Sean Rupka
Lethal Autonomous Weapons · DOI: 10.1093/oso/9780197546048.003.0010
Abstract: Combat automation, enabled by rapid technological advancements in artificial intelligence and machine learning, is a guiding principle in the conduct of war today. Yet empirical data on the impact of algorithmic combat on military personnel remain limited. This chapter draws on data from a historically unprecedented survey of Australian Defence Force Academy cadets. Given that this generation of trainees will be the first to deploy autonomous systems (AS) in a systematic way, their views are especially important. The analysis focuses on five themes: the dynamics of human-machine teams; the perceived risks, benefits, and capabilities of AS; the changing nature of (and respect for) military labor and incentives; preferences for overseeing a robot versus carrying out a mission oneself; and the changing meaning of soldiering. We use the survey data to explore the interconnected consequences of neoliberal governing for cadets' attitudes toward AS, and toward citizen-soldiering more broadly. Overall, this chapter argues that Australian cadets are open to working with and alongside AS, but only under the right conditions. Armed forces, in an attempt to capitalize on these technologically savvy cadets, have shifted from institutional to occupational employers. In our concluding remarks, however, we caution against unchecked technological fetishism, highlighting the need to critically question the risks of AS for moral deskilling, and the application of market-based notions of freedom to the military domain.
Citations: 1
The Robot Dogs of War
D. Baker
Lethal Autonomous Weapons · DOI: 10.1093/oso/9780197546048.003.0003
Abstract: The prospect of robotic warriors striding the battlefield has, somewhat unsurprisingly, been shaped by perceptions drawn from science fiction. While illustrative, such comparisons are largely unhelpful for those considering the potential ethical implications of autonomous weapons systems. In this chapter, I offer two alternative sources of ethical comparison. Drawing on military history and current practice for guidance, this chapter highlights the parallels that make mercenaries (the 'dogs of war') and military working dogs (the actual dogs of war) useful lenses through which to consider lethal autonomous weapons systems: the robot dogs of war. Through these comparisons, I demonstrate that some of the most commonly raised ethical objections to autonomous weapons systems are overstated, misguided, or otherwise dependent on outside circumstances.
Citations: 0
The Better Instincts of Humanity: Humanitarian Arguments in Defense of International Arms Control
Natalia Jevglevskaja, Rain Liivoja
Lethal Autonomous Weapons · DOI: 10.1093/oso/9780197546048.003.0008
Abstract: Disagreements about the humanitarian risk-benefit balance of weapons technology are not new. The history of arms control negotiations offers many examples of weaponry regarded as 'inhumane' by some while hailed by others as a means of reducing injury or suffering in conflict. The debate about autonomous weapons systems reflects this dynamic, yet it also stands out in some respects, notably the largely hypothetical nature of the concerns raised about these systems and the apparent disparities in states' approaches to conceptualizing autonomy. This chapter considers how misconceptions surrounding autonomous weapons technology impede the progress of the deliberations of the Group of Governmental Experts on Lethal Autonomous Weapons Systems. The marked tendency to focus on the perceived risks posed by these systems, far more than on the potential operational and humanitarian advantages they offer, is likely to jeopardize the prospect of a meaningful resolution to the debate.
Citations: 0
May Machines Take Lives to Save Lives? Human Perceptions of Autonomous Robots (with the Capacity to Kill)
Matthias Scheutz, B. Malle
Lethal Autonomous Weapons · DOI: 10.1093/oso/9780197546048.003.0007
Abstract: In the future, artificial agents are likely to make life-and-death decisions about humans. Ordinary people are the likely arbiters of whether these decisions are morally acceptable. We summarize research on how ordinary people evaluate artificial (compared to human) agents that make life-and-death decisions. The results suggest that many people are willing to morally evaluate artificial agents' decisions, and when asked how artificial and human agents should decide, they impose the same norms on both. However, when confronted with how the agents did in fact decide, people judge the artificial agents' decisions differently from those of humans. This difference is best explained by the justifications people grant the human agents (by imagining their experience of the decision situation) but do not grant the artificial agents (whose experience they cannot imagine). If people fail to infer the decision processes and justifications of artificial agents, these agents will have to communicate such justifications explicitly, so that people can understand and accept their decisions.
Citations: 10