Authors: Hanzhong Zhang, Jibin Yin, Haoyang Wang, Ziwei Xiang
Journal: Applied Soft Computing, Volume 185, Article 113922
DOI: 10.1016/j.asoc.2025.113922
Publication date: 2025-09-22
Simulating phenomenal consciousness using generative agents based on large language models
Large Language Models (LLMs) still face challenges in tasks that require understanding implicit instructions and applying common-sense knowledge. In such scenarios, LLMs may require multiple attempts to reach human-level performance, potentially leading to inaccurate responses or inferences in practical environments and affecting their long-term consistency and behavior. This paper introduces the Internal Time-Consciousness Machine (ITCM), a computational consciousness structure that simulates the process of human consciousness. We further propose the ITCM-based Agent (ITCMA), which supports action generation and reasoning in open-world settings and can complete tasks independently. ITCMA enhances LLMs' ability to understand implicit instructions and apply common-sense knowledge by accounting for agents' interaction with, and reasoning about, the environment. The trained ITCMA outperforms the state of the art (SOTA) on the seen set. Even the untrained ITCMA achieves higher task completion rates than SOTA on the seen set, indicating its advantage over traditional intelligent agents in utility and generalization. In real-world tasks with quadruped robots, the task completion rate of the untrained ITCMA is close to its performance on the unseen set, demonstrating comparable utility and universality in real-world settings.
CCS Concepts: • Human-centered computing → Interactive systems and tools; • Computing methodologies → Natural language processing.
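The abstract does not specify ITCMA's internal architecture, but the perceive-reason-act cycle it alludes to (interaction with the environment feeding reasoning, which drives action generation) can be illustrated with a minimal, hypothetical sketch. Everything below is assumed for illustration: the `stub_llm` function stands in for a real LLM call, and the `Agent` class is a generic memory-plus-reasoning loop, not the authors' ITCMA.

```python
from dataclasses import dataclass, field

def stub_llm(prompt: str) -> str:
    # Stand-in for a real LLM call: infers an action from the prompt
    # using simple keyword matching, mimicking common-sense reasoning.
    if "key" in prompt and "locked door" in prompt:
        return "use key on door"
    return "explore"

@dataclass
class Agent:
    # Accumulated observations and past actions serve as the agent's
    # memory, which is folded into each reasoning prompt.
    memory: list = field(default_factory=list)

    def perceive(self, observation: str) -> None:
        self.memory.append(observation)

    def act(self, goal: str) -> str:
        context = "; ".join(self.memory)
        prompt = f"Goal: {goal}. Observations: {context}."
        action = stub_llm(prompt)
        self.memory.append(f"did: {action}")
        return action

agent = Agent()
agent.perceive("locked door ahead")
agent.perceive("a key on the table")
print(agent.act("open the door"))  # -> use key on door
```

The point of the sketch is the loop structure, not the trivial stub: observations accumulate in memory, the reasoning step conditions on that memory, and chosen actions are written back, which is one common way LLM-based agents maintain behavioral consistency over time.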
Journal introduction:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is on publishing the highest-quality research in the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, keeping publication times short.