{"title":"Harnessing Large Language Models to Simulate Realistic Human Responses to Social Engineering Attacks: A Case Study","authors":"Mohammad Asfour, Juan Carlos Murillo","doi":"10.52306/2578-3289.1172","DOIUrl":null,"url":null,"abstract":"The research publication, “Generative Agents: Interactive Simulacra of Human Behavior,” by Stanford and Google in 2023 established that large language models (LLMs) such as GPT-4 can generate interactive agents with credible and emergent human-like behaviors. However, their application in simulating human responses in cybersecurity scenarios, particularly in social engineering attacks, remains unexplored. In addressing that gap, this study explores the potential of LLMs, specifically the Open AI GPT-4 model, to simulate a broad spectrum of human responses to social engineering attacks that exploit human social behaviors, framing our primary research question: How does the simulated behavior of human targets, based on the Big Five personality traits, responds to social engineering attacks? . This study aims to provide valuable insights for organizations and researchers striving to systematically analyze human behavior and identify prevalent human qualities, as defined by the Big Five personality traits, that are susceptible to social engineering attacks, specifically phishing emails. Also, it intends to offer recommendations for the cybersecurity industry and policymakers on mitigating these risks. The findings indicate that LLMs can provide realistic simulations of human responses to social engineering attacks, highlighting certain personality traits as more susceptible.","PeriodicalId":492275,"journal":{"name":"International journal of cybersecurity intelligence and cybercrime","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of cybersecurity intelligence and cybercrime","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52306/2578-3289.1172","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The 2023 research publication “Generative Agents: Interactive Simulacra of Human Behavior,” by researchers at Stanford and Google, established that large language models (LLMs) such as GPT-4 can generate interactive agents with credible, emergent human-like behaviors. However, their application to simulating human responses in cybersecurity scenarios, particularly social engineering attacks, remains unexplored. To address that gap, this study explores the potential of LLMs, specifically the OpenAI GPT-4 model, to simulate a broad spectrum of human responses to social engineering attacks that exploit human social behaviors, framing our primary research question: How does the simulated behavior of human targets, characterized by the Big Five personality traits, respond to social engineering attacks? This study aims to provide valuable insights for organizations and researchers striving to systematically analyze human behavior and identify prevalent human qualities, as defined by the Big Five personality traits, that are susceptible to social engineering attacks, specifically phishing emails. It also offers recommendations for the cybersecurity industry and policymakers on mitigating these risks. The findings indicate that LLMs can provide realistic simulations of human responses to social engineering attacks and highlight certain personality traits as more susceptible than others.
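To make the described setup concrete, the sketch below shows one way a GPT-4 agent could be prompted to role-play a persona defined by Big Five trait levels and react to a phishing email. This is a minimal illustration assuming the OpenAI Python SDK (openai>=1.0); the persona values, email text, and prompt wording are hypothetical and are not taken from the authors' experimental protocol.

```python
# Minimal sketch: asking GPT-4 to role-play a Big Five persona reacting to a
# phishing email. Assumes the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment; persona and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

# Hypothetical persona expressed as Big Five trait levels.
persona = {
    "openness": "high",
    "conscientiousness": "low",
    "extraversion": "high",
    "agreeableness": "high",
    "neuroticism": "moderate",
}

# Illustrative phishing lure (not from the study's dataset).
phishing_email = (
    "Subject: Urgent: Verify your payroll account\n"
    "Your payroll account will be suspended in 24 hours unless you confirm "
    "your credentials at the link below."
)

system_prompt = (
    "You are role-playing an office employee with the following Big Five "
    "personality profile: "
    + ", ".join(f"{trait}: {level}" for trait, level in persona.items())
    + ". Stay in character. Describe, in first person, how you would react to "
    "the email you receive, and state clearly whether you would click the link."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": phishing_email},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Repeating such a call across personas that vary one trait at a time is one straightforward way to compare how simulated susceptibility to the same lure shifts with personality.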