{"title":"Self-Emotion Blended Dialogue Generation in Social Simulation Agents","authors":"Qiang Zhang, Jason Naradowsky, Yusuke Miyao","doi":"arxiv-2408.01633","DOIUrl":null,"url":null,"abstract":"When engaging in conversations, dialogue agents in a virtual simulation\nenvironment may exhibit their own emotional states that are unrelated to the\nimmediate conversational context, a phenomenon known as self-emotion. This\nstudy explores how such self-emotion affects the agents' behaviors in dialogue\nstrategies and decision-making within a large language model (LLM)-driven\nsimulation framework. In a dialogue strategy prediction experiment, we analyze\nthe dialogue strategy choices employed by agents both with and without\nself-emotion, comparing them to those of humans. The results show that\nincorporating self-emotion helps agents exhibit more human-like dialogue\nstrategies. In an independent experiment comparing the performance of models\nfine-tuned on GPT-4 generated dialogue datasets, we demonstrate that\nself-emotion can lead to better overall naturalness and humanness. Finally, in\na virtual simulation environment where agents have discussions on multiple\ntopics, we show that self-emotion of agents can significantly influence the\ndecision-making process of the agents, leading to approximately a 50% change in\ndecisions.","PeriodicalId":501315,"journal":{"name":"arXiv - CS - Multiagent Systems","volume":"119 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multiagent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.01633","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
When engaging in conversations, dialogue agents in a virtual simulation environment may exhibit emotional states that are unrelated to the immediate conversational context, a phenomenon known as self-emotion. This study explores how such self-emotion affects agents' dialogue strategies and decision-making within a large language model (LLM)-driven simulation framework. In a dialogue strategy prediction experiment, we analyze the dialogue strategies chosen by agents with and without self-emotion, comparing them to those of humans. The results show that incorporating self-emotion helps agents exhibit more human-like dialogue strategies. In an independent experiment comparing models fine-tuned on GPT-4-generated dialogue datasets, we demonstrate that self-emotion leads to better overall naturalness and humanness. Finally, in a virtual simulation environment where agents discuss multiple topics, we show that agents' self-emotion can significantly influence their decision-making, changing approximately 50% of decisions.
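
To make the setup concrete, below is a minimal, hypothetical sketch (in Python) of one way a self-emotion state could be blended into an LLM agent's prompt in a simulation loop. The emotion labels, class names, and prompt wording are illustrative assumptions for exposition only, not the paper's actual implementation.

```python
# Hypothetical sketch: blending a "self-emotion" state (unrelated to the
# conversation) into an LLM agent's prompt. All names and the injection
# mechanism are assumptions, not the paper's implementation.
import random
from dataclasses import dataclass

# Assumed emotion inventory for illustration.
SELF_EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]


@dataclass
class Agent:
    name: str
    self_emotion: str  # internal state, independent of the dialogue context

    def build_prompt(self, dialogue_history: list[str]) -> str:
        # Condition the agent on both the conversation so far and its
        # self-emotion, so strategy choice can reflect either.
        context = "\n".join(dialogue_history)
        return (
            f"You are {self.name}. Independent of the conversation, "
            f"you currently feel {self.self_emotion}.\n"
            f"Conversation so far:\n{context}\n"
            f"Choose a dialogue strategy and reply accordingly."
        )


def sample_self_emotion() -> str:
    # In a long-running simulation this state might drift over time or be
    # triggered by off-screen events; here it is simply sampled uniformly.
    return random.choice(SELF_EMOTIONS)


if __name__ == "__main__":
    agent = Agent(name="Alice", self_emotion=sample_self_emotion())
    prompt = agent.build_prompt(["Bob: Shall we move the meeting to Friday?"])
    print(prompt)  # this prompt would then be sent to an LLM for a response
```

Under this kind of design, ablating self-emotion is as simple as fixing the state to "neutral" (or omitting the emotion sentence), which mirrors the paper's comparison of agents with and without self-emotion.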