Wenyuan Zhang, Jiawei Sheng, Shuaiyi Nie, Zefeng Zhang, Xinghua Zhang, Yongquan He, Tingwen Liu
{"title":"揭示在 LLM 角色扮演中检测角色知识错误所面临的挑战","authors":"Wenyuan Zhang, Jiawei Sheng, Shuaiyi Nie, Zefeng Zhang, Xinghua Zhang, Yongquan He, Tingwen Liu","doi":"arxiv-2409.11726","DOIUrl":null,"url":null,"abstract":"Large language model (LLM) role-playing has gained widespread attention,\nwhere the authentic character knowledge is crucial for constructing realistic\nLLM role-playing agents. However, existing works usually overlook the\nexploration of LLMs' ability to detect characters' known knowledge errors (KKE)\nand unknown knowledge errors (UKE) while playing roles, which would lead to\nlow-quality automatic construction of character trainable corpus. In this\npaper, we propose a probing dataset to evaluate LLMs' ability to detect errors\nin KKE and UKE. The results indicate that even the latest LLMs struggle to\neffectively detect these two types of errors, especially when it comes to\nfamiliar knowledge. We experimented with various reasoning strategies and\npropose an agent-based reasoning method, Self-Recollection and Self-Doubt\n(S2RD), to further explore the potential for improving error detection\ncapabilities. Experiments show that our method effectively improves the LLMs'\nability to detect error character knowledge, but it remains an issue that\nrequires ongoing attention.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"6 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Revealing the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing\",\"authors\":\"Wenyuan Zhang, Jiawei Sheng, Shuaiyi Nie, Zefeng Zhang, Xinghua Zhang, Yongquan He, Tingwen Liu\",\"doi\":\"arxiv-2409.11726\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large language model (LLM) role-playing has gained widespread attention,\\nwhere the authentic character knowledge is crucial for constructing realistic\\nLLM role-playing agents. However, existing works usually overlook the\\nexploration of LLMs' ability to detect characters' known knowledge errors (KKE)\\nand unknown knowledge errors (UKE) while playing roles, which would lead to\\nlow-quality automatic construction of character trainable corpus. In this\\npaper, we propose a probing dataset to evaluate LLMs' ability to detect errors\\nin KKE and UKE. The results indicate that even the latest LLMs struggle to\\neffectively detect these two types of errors, especially when it comes to\\nfamiliar knowledge. We experimented with various reasoning strategies and\\npropose an agent-based reasoning method, Self-Recollection and Self-Doubt\\n(S2RD), to further explore the potential for improving error detection\\ncapabilities. 
Experiments show that our method effectively improves the LLMs'\\nability to detect error character knowledge, but it remains an issue that\\nrequires ongoing attention.\",\"PeriodicalId\":501541,\"journal\":{\"name\":\"arXiv - CS - Human-Computer Interaction\",\"volume\":\"6 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Human-Computer Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11726\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11726","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Revealing the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing
Large language model (LLM) role-playing has gained widespread attention, and authentic character knowledge is crucial for constructing realistic LLM role-playing agents. However, existing works usually overlook LLMs' ability to detect characters' known knowledge errors (KKE) and unknown knowledge errors (UKE) while playing roles, which leads to low-quality automatic construction of character training corpora. In this paper, we propose a probing dataset to evaluate LLMs' ability to detect both KKE and UKE. The results indicate that even the latest LLMs struggle to detect these two types of errors effectively, especially when the knowledge involved is familiar. We experimented with various reasoning strategies and propose an agent-based reasoning method, Self-Recollection and Self-Doubt (S2RD), to further explore the potential for improving error-detection capabilities. Experiments show that our method effectively improves LLMs' ability to detect erroneous character knowledge, but the problem remains one that requires ongoing attention.
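The abstract names the two stages of S2RD but does not give prompts or interfaces, so the sketch below is only an illustration of what a "recollect, then doubt" agent loop could look like: the `llm` callable, the prompt wording, and the OK/KKE/UKE label format are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Self-Recollection and Self-Doubt (S2RD)-style
# loop, inferred from the method name alone. `llm` is any function that
# maps a prompt string to a model response string.

from typing import Callable

def s2rd_detect(llm: Callable[[str], str], character: str, statement: str) -> str:
    """Classify a role-played statement as OK / KKE / UKE (illustrative labels)."""
    # Stage 1 (self-recollection): have the model first recall what the
    # character plausibly would and would not know, before judging anything.
    recollection = llm(
        f"You are analysing the character {character}. "
        f"List the facts this character would and would not know "
        f"that are relevant to the statement:\n{statement}"
    )

    # Stage 2 (self-doubt): ask the model to challenge its own recollection
    # and only then commit to a verdict.
    verdict = llm(
        f"Recalled knowledge:\n{recollection}\n\n"
        f"Statement:\n{statement}\n\n"
        "Doubt your recollection: could any recalled fact be wrong or "
        "anachronistic for this character? After reconsidering, answer with "
        "exactly one label: OK (consistent), KKE (the character should know "
        "this but the statement gets it wrong), or UKE (the character could "
        "not know this at all)."
    )
    return verdict.strip()
```

Separating recollection from judgment mirrors the paper's finding that errors involving familiar knowledge are the hardest to catch: the first stage forces the knowledge to be made explicit before the second stage stress-tests it.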