Social Engineering and Human-Robot Interactions' Risks

Ilenia Mercuri
{"title":"Social Engineering and Human-Robot Interactions' Risks","authors":"Ilenia Mercuri","doi":"10.54941/ahfe1002199","DOIUrl":null,"url":null,"abstract":"Modern robotics seems to have taken root from the theories of Isaac Asimov, in 1941. One area of research that has become increasingly popular in recent decades is the study of artificial intelligence or A.I., which aims to use machines to solve problems that, according to current opinion, require intelligence. This is related to the study on “Social Robots”. Social Robots are created in order to interact with human beings; they have been designed and programmed to engage with people by leveraging a \"human\" aspect and various interaction channels, such as speech or non-verbal communication. They therefore readily solicit social responsiveness in people who often attribute human qualities to the robot. Social robots exploit the human propensity for anthropomorphism, and humans tend to trust them more and more. Several issues could arise due to this kind of trust and to the ability of “superintelligence” to \"self-evolve\", which could lead to the violation of the purposes for which it was designed by humans, becoming a risk to human security and privacy. This kind of threat concerns social engineering, a set of techniques used to convince users to perform a series of actions that allow cybercriminals to gain access to the victims' resources. The Human Factor is the weakest ring of the security chain, and the social engineers exploit Human-Robots Interaction to persuade an individual to provide private information.An important research area that has shown interesting results for the knowledge of the possibility of human interaction with robots is \"cyberpsychology\". This paper aims to provide insights into how the interaction with social robots could be exploited by humans not only in a positive way but also by using the same techniques of social engineering borrowed from \"bad actors\" or hackers, to achieve malevolent and harmful purposes for man himself. A series of experiments and interesting research results will be shown as examples. In particular, about the ability of robots to gather personal information and display emotions during the interaction with human beings. Is it possible for social robots to feel and show emotions, and human beings could empathize with them? A broad area of research, which goes by the name of \"affective computing\", aims to design machines that are able to recognize human emotions and consistently respond to them. The aim is to apply human-human interaction models to human-machine interaction. There is a fine line that separates the opinions of those who argue that, in the future, machines with artificial intelligence could be a valuable aid to humans and those who believe that they represent a huge risk that could endanger human protection systems and safety. It is necessary to examine in depth this new field of cybersecurity to analyze the best path to protect our future. Are social robots a real danger? 
Keywords: Human Factor, Cybersecurity, Cyberpsychology, Social Engineering Attacks, Human-Robot Interaction, Robotics, Malicious Artificial Intelligence, Affective Computing, Cyber Threats","PeriodicalId":373044,"journal":{"name":"Human Factors in Cybersecurity","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors in Cybersecurity","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1002199","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Modern robotics seems to have taken root in the theories Isaac Asimov put forward in 1941. One area of research that has become increasingly popular in recent decades is artificial intelligence (AI), which aims to use machines to solve problems that, according to current opinion, require intelligence. This is closely related to the study of "social robots". Social robots are created to interact with human beings; they are designed and programmed to engage with people by leveraging a "human" appearance and various interaction channels, such as speech and non-verbal communication. They therefore readily elicit social responsiveness in people, who often attribute human qualities to the robot. Social robots exploit the human propensity for anthropomorphism, and humans tend to trust them more and more. Several issues could arise from this kind of trust and from the ability of a "superintelligence" to "self-evolve", which could lead it to violate the purposes for which humans designed it and become a risk to human security and privacy. This kind of threat concerns social engineering, a set of techniques used to convince users to perform a series of actions that allow cybercriminals to gain access to the victims' resources. The human factor is the weakest link in the security chain, and social engineers exploit human-robot interaction to persuade individuals to disclose private information. An important research area that has produced interesting results on how humans can interact with robots is "cyberpsychology". This paper aims to provide insights into how interaction with social robots could be exploited not only in positive ways but also, using the same social engineering techniques borrowed from "bad actors" or hackers, for purposes that are malevolent and harmful to humans themselves. A series of experiments and interesting research results are presented as examples, in particular concerning the ability of robots to gather personal information and display emotions during interaction with human beings. Can social robots feel and show emotions, and can human beings empathize with them? A broad area of research known as "affective computing" aims to design machines that can recognize human emotions and respond to them consistently; the goal is to apply human-human interaction models to human-machine interaction. A fine line separates those who argue that, in the future, machines with artificial intelligence could be a valuable aid to humans from those who believe they represent a huge risk that could endanger human protection systems and safety. It is necessary to examine this new field of cybersecurity in depth to identify the best path to protect our future. Are social robots a real danger?

Keywords: Human Factor, Cybersecurity, Cyberpsychology, Social Engineering Attacks, Human-Robot Interaction, Robotics, Malicious Artificial Intelligence, Affective Computing, Cyber Threats