Ethical Risks in the Practice of Artificial Intelligence
V. Kazaryan, Ksenia Shutova
Ideas and Ideals, published 2023-06-28
DOI: 10.17212/2075-0862-2023-15.2.2-346-364
Abstract
The article raises the question of the relationship between the practice of artificial intelligence and universal ethics. The topic is fundamentally important: a person or a society that has lost its ethical foundations is deprived of its humanity. The ethics of scientists, designers, and senior managers plays a decisive role in the modern development and application of artificial intelligence. In their work an ethic of responsibility has developed, originating in the Russell-Einstein Manifesto, published in 1955 under the conditions of the nuclear standoff of the Cold War. The article documents the extraordinarily rapid growth of artificial intelligence applications in people's practical lives. Attention is drawn to the uncertainty inherent in practical applications of artificial intelligence, to the unforeseen consequences that arise alongside the expected ones, and to the question of who bears responsibility for those consequences: individuals, corporations, or governments. Responsibility lies with the one who decides on the action: the actor. The actor is thus placed in a situation of ethical risk. It is shown that this risk increases owing to a number of circumstances: 1) the variety of applications; 2) uncontrolled, rampant growth; 3) difficulties in tracking the empirical situation of application; 4) difficulties in the theoretical analysis of the situation of action. The article focuses on the risks of the peaceful use of remotely piloted aircraft, as well as on their military use and on automated weapons that act without operator confirmation. The sharp, apparently exponential growth of information technology and the practical implementation of artificial intelligence place people in a difficult ethical situation, a choice between 'to have' and 'to be'. Or is there a third choice?