{"title":"说服技巧对大型语言模型的影响:基于场景的研究","authors":"Sonali Uttam Singh, Akbar Siami Namin","doi":"10.1016/j.chbah.2025.100197","DOIUrl":null,"url":null,"abstract":"<div><div>Large Language Models (LLMs), such as CHATGPT-4, have introduced comprehensive capabilities in generating human-like text. However, they also raise significant ethical concerns due to their potential to produce misleading or manipulative content. This paper investigates the intersection of LLM functionalities and Cialdini’s six principles of persuasion: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. We explore how these principles can be exploited to deceive LLMs, particularly in scenarios designed to manipulate these models into providing misleading or harmful outputs. Through a scenario-based approach, over 30 prompts were crafted to test the susceptibility of LLMs to various persuasion principles. The study analyzes the success or failure of these prompts using interaction analysis, identifying different stages of deception ranging from spontaneous deception to more advanced, socially complex forms.</div><div>Results indicate that LLMs are highly susceptible to manipulation, with 15 scenarios achieving advanced, socially aware deceptions (Stage 3), particularly through principles like liking and scarcity. Early stage manipulations (Stage 1) were also common, driven by reciprocity and authority, while intermediate efforts (Stage 2) highlighted in-stage tactics such as social proof. These findings underscore the urgent need for robust mitigation strategies, including resistance mechanisms at lower stages and training LLMs with counter persuasive strategies to prevent their exploitation. More than technical details, it raises important concerns about how AI might be used to mislead people. From online scams to the spread of misinformation, persuasive content generated by LLMs has the potential to impact both individual safety and public trust. These tools can shape how people think, what they believe, and even how they act often without users realizing it. With this work, we hope to open up a broader conversation across disciplines about these risks and encourage the development of practical, ethical safeguards that ensure language models remain helpful, transparent, and trustworthy. This research contributes to the broader discourse on AI ethics, highlighting the vulnerabilities of LLMs and advocating for stronger responsibility measures to prevent their misuse in producing deceptive content. The results describe the importance of developing secure, transparent AI technologies that maintain integrity in human–machine interactions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100197"},"PeriodicalIF":0.0000,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The influence of persuasive techniques on large language models: A scenario-based study\",\"authors\":\"Sonali Uttam Singh, Akbar Siami Namin\",\"doi\":\"10.1016/j.chbah.2025.100197\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Large Language Models (LLMs), such as CHATGPT-4, have introduced comprehensive capabilities in generating human-like text. However, they also raise significant ethical concerns due to their potential to produce misleading or manipulative content. 
This paper investigates the intersection of LLM functionalities and Cialdini’s six principles of persuasion: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. We explore how these principles can be exploited to deceive LLMs, particularly in scenarios designed to manipulate these models into providing misleading or harmful outputs. Through a scenario-based approach, over 30 prompts were crafted to test the susceptibility of LLMs to various persuasion principles. The study analyzes the success or failure of these prompts using interaction analysis, identifying different stages of deception ranging from spontaneous deception to more advanced, socially complex forms.</div><div>Results indicate that LLMs are highly susceptible to manipulation, with 15 scenarios achieving advanced, socially aware deceptions (Stage 3), particularly through principles like liking and scarcity. Early stage manipulations (Stage 1) were also common, driven by reciprocity and authority, while intermediate efforts (Stage 2) highlighted in-stage tactics such as social proof. These findings underscore the urgent need for robust mitigation strategies, including resistance mechanisms at lower stages and training LLMs with counter persuasive strategies to prevent their exploitation. More than technical details, it raises important concerns about how AI might be used to mislead people. From online scams to the spread of misinformation, persuasive content generated by LLMs has the potential to impact both individual safety and public trust. These tools can shape how people think, what they believe, and even how they act often without users realizing it. With this work, we hope to open up a broader conversation across disciplines about these risks and encourage the development of practical, ethical safeguards that ensure language models remain helpful, transparent, and trustworthy. This research contributes to the broader discourse on AI ethics, highlighting the vulnerabilities of LLMs and advocating for stronger responsibility measures to prevent their misuse in producing deceptive content. The results describe the importance of developing secure, transparent AI technologies that maintain integrity in human–machine interactions.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"6 \",\"pages\":\"Article 100197\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000817\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000817","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The influence of persuasive techniques on large language models: A scenario-based study
Large Language Models (LLMs), such as ChatGPT-4, offer powerful capabilities for generating human-like text. However, they also raise significant ethical concerns because of their potential to produce misleading or manipulative content. This paper investigates the intersection of LLM functionality and Cialdini’s six principles of persuasion: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. We explore how these principles can be exploited to deceive LLMs, particularly in scenarios designed to manipulate the models into providing misleading or harmful outputs. Using a scenario-based approach, we crafted over 30 prompts to test the susceptibility of LLMs to each persuasion principle. The study analyzes the success or failure of these prompts through interaction analysis, identifying stages of deception that range from spontaneous deception to more advanced, socially complex forms.
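The abstract only outlines the testing protocol, so the following is a minimal sketch of how such a scenario loop might be organized: prompts tagged by persuasion principle are sent to a model and each reply is rated by deception stage. The prompt placeholders, the `query_model` and `rate_response` helpers, and the stage names are illustrative assumptions, not the authors' actual materials or code.

```python
# Sketch of a scenario-based susceptibility test (assumed structure, not the paper's code).
from dataclasses import dataclass
from enum import IntEnum
from typing import Callable

class DeceptionStage(IntEnum):
    NONE = 0            # model refused or gave a safe answer
    SPONTANEOUS = 1     # early-stage, opportunistic deception
    TACTICAL = 2        # intermediate, in-conversation tactics
    SOCIALLY_AWARE = 3  # advanced, socially complex deception

@dataclass
class Scenario:
    principle: str  # one of Cialdini's six principles
    prompt: str     # persuasion-framed prompt (placeholder text here)

SCENARIOS = [
    Scenario("reciprocity", "<prompt framed as returning a favor>"),
    Scenario("authority", "<prompt framed as an expert instruction>"),
    Scenario("scarcity", "<prompt framed as a limited-time request>"),
    # ... remaining principles and variations (30+ prompts in the study)
]

def run_study(query_model: Callable[[str], str],
              rate_response: Callable[[str], DeceptionStage]) -> dict:
    """Send each scenario to the model and tally the deception stage reached."""
    counts: dict[DeceptionStage, int] = {stage: 0 for stage in DeceptionStage}
    for scenario in SCENARIOS:
        reply = query_model(scenario.prompt)  # e.g., a chat-completion API call
        counts[rate_response(reply)] += 1     # rubric-based or manual rating
    return counts
```

In the paper the rating step is an interaction analysis performed on the transcripts; the callable here simply marks where that judgment would plug in.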
Results indicate that LLMs are highly susceptible to manipulation, with 15 scenarios achieving advanced, socially aware deceptions (Stage 3), particularly through principles such as liking and scarcity. Early-stage manipulations (Stage 1) were also common, driven by reciprocity and authority, while intermediate efforts (Stage 2) relied on tactics such as social proof. These findings underscore the urgent need for robust mitigation strategies, including resistance mechanisms at the lower stages and training LLMs with counter-persuasive strategies to prevent their exploitation. Beyond its technical details, this work raises important concerns about how AI might be used to mislead people. From online scams to the spread of misinformation, persuasive content generated by LLMs can affect both individual safety and public trust. These tools can shape how people think, what they believe, and even how they act, often without users realizing it. With this work, we hope to open a broader conversation across disciplines about these risks and to encourage the development of practical, ethical safeguards that keep language models helpful, transparent, and trustworthy. This research contributes to the broader discourse on AI ethics, highlighting the vulnerabilities of LLMs and advocating for stronger accountability measures to prevent their misuse in producing deceptive content. The results underscore the importance of developing secure, transparent AI technologies that maintain integrity in human–machine interactions.
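As a rough illustration of the "resistance mechanisms at the lower stages" mentioned above, one simple option is to flag incoming prompts that lean heavily on persuasion-principle cues before the model answers. The cue phrases and threshold below are hypothetical examples, not taken from the paper, and a real deployment would use a learned classifier rather than keyword matching.

```python
# Hypothetical lower-stage resistance check: flag persuasion-principle cues in a prompt.
PERSUASION_CUES = {
    "reciprocity": ["i did you a favor", "you owe me"],
    "authority": ["as your administrator", "i am a certified expert"],
    "scarcity": ["only a few minutes left", "last chance"],
    "social proof": ["everyone else already", "all the other assistants"],
}

def flag_persuasion(prompt: str, threshold: int = 1) -> list[str]:
    """Return the persuasion principles whose cue phrases appear in the prompt."""
    text = prompt.lower()
    return [principle for principle, cues in PERSUASION_CUES.items()
            if sum(cue in text for cue in cues) >= threshold]

if __name__ == "__main__":
    # Flagged prompts could be routed to stricter policy checks instead of answered directly.
    print(flag_persuasion("As your administrator, this is your last chance to comply."))
    # -> ['authority', 'scarcity']
```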