Ethical Risks in the Practice of Artificial Intelligence

V. Kazaryan, Ksenia Shutova
Journal: Ideas and Ideals
DOI: 10.17212/2075-0862-2023-15.2.2-346-364
Published: 2023-06-28 (Journal Article)
Citations: 0

Abstract

The article raises the question of the relationship between the practice of artificial intelligence and universal ethics. The topic is fundamentally important, since a person or a society that has lost its ethical foundations is deprived of its humanity. The ethics of scientists, designers, and high-level managers plays a decisive role in the modern development and application of artificial intelligence. In their activities an ethic of responsibility develops, originating from the Russell-Einstein Manifesto, published in 1955 under the conditions of the Cold War nuclear standoff.

The article traces the extraordinarily rapid growth of artificial intelligence applications in people's practical lives. Attention is drawn to the uncertainty inherent in practical applications of artificial intelligence: alongside the expected consequences there are unforeseen ones, and it is unclear who bears responsibility for them (individuals, corporations, or governments). Responsibility lies with the one who decides on the action: the actor. The actor is thus in a situation of ethical risk. The risk increases due to a number of circumstances: 1) the variety of applications; 2) uncontrolled, rampant growth; 3) the difficulty of tracking the empirical situation of application; 4) the difficulty of theoretical analysis of the situation of action.

The article focuses on the risks of the peaceful use of remotely piloted aircraft, as well as their military use and automated weapons operating without operator confirmation. The sharp, apparently exponential growth of information technology and the practical implementation of artificial intelligence put people in a difficult ethical situation: what to choose, 'to have or to be'? Or is there a third choice?