Enhancing memory retrieval in generative agents through LLM-trained cross attention networks.

IF 2.6 · CAS Region 3 (Psychology) · Q2 PSYCHOLOGY, MULTIDISCIPLINARY
Frontiers in Psychology · Pub Date: 2025-05-07 · eCollection Date: 2025-01-01 · DOI: 10.3389/fpsyg.2025.1591618
Chuanyang Hong, Qingyun He
{"title":"通过llm训练的交叉注意网络增强生成智能体的记忆检索。","authors":"Chuanyang Hong, Qingyun He","doi":"10.3389/fpsyg.2025.1591618","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>The surge in the capabilities of large language models (LLMs) has propelled the development of Artificial General Intelligence (AGI), highlighting generative agents as pivotal components for emulating complex AI behaviors. Given the high costs associated with individually training LLMs for each AI agent, there is a critical need for advanced memory retrieval mechanisms to maintain the unique characteristics and memories of individual AI agents.</p><p><strong>Methods: </strong>In this research, we developed a text-based simulation of a generative agent world, constructing a community with multiple agents and locations in which certain levels of interaction were enabled. Within this framework, we introduced a novel memory retrieval system using an Auxiliary Cross Attention Network (ACAN). This system calculates and ranks attention weights between an agent's current state and stored memories, selecting the most relevant memories for any given situation. In a novel approach, we incorporated LLM assistance, comparing memories retrieved by our model with those extracted using a base method during training, and constructing a novel loss function based on these comparisons to optimize the training process effectively. To our knowledge, this is the first study to utilize LLMs to train a dedicated agent memory retrieval network.</p><p><strong>Results: </strong>Our empirical evaluations demonstrate that this approach substantially enhances the quality of memory retrieval, thereby increasing the adaptability and behavioral consistency of agents in fluctuating environments.</p><p><strong>Discussion: </strong>Our findings not only introduce new perspectives and methodologies for memory retrieval in generative agents but also extend the utility of LLMs in memory management across varied AI agent applications.</p>","PeriodicalId":12525,"journal":{"name":"Frontiers in Psychology","volume":"16 ","pages":"1591618"},"PeriodicalIF":2.6000,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092450/pdf/","citationCount":"0","resultStr":"{\"title\":\"Enhancing memory retrieval in generative agents through LLM-trained cross attention networks.\",\"authors\":\"Chuanyang Hong, Qingyun He\",\"doi\":\"10.3389/fpsyg.2025.1591618\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>The surge in the capabilities of large language models (LLMs) has propelled the development of Artificial General Intelligence (AGI), highlighting generative agents as pivotal components for emulating complex AI behaviors. Given the high costs associated with individually training LLMs for each AI agent, there is a critical need for advanced memory retrieval mechanisms to maintain the unique characteristics and memories of individual AI agents.</p><p><strong>Methods: </strong>In this research, we developed a text-based simulation of a generative agent world, constructing a community with multiple agents and locations in which certain levels of interaction were enabled. Within this framework, we introduced a novel memory retrieval system using an Auxiliary Cross Attention Network (ACAN). 
This system calculates and ranks attention weights between an agent's current state and stored memories, selecting the most relevant memories for any given situation. In a novel approach, we incorporated LLM assistance, comparing memories retrieved by our model with those extracted using a base method during training, and constructing a novel loss function based on these comparisons to optimize the training process effectively. To our knowledge, this is the first study to utilize LLMs to train a dedicated agent memory retrieval network.</p><p><strong>Results: </strong>Our empirical evaluations demonstrate that this approach substantially enhances the quality of memory retrieval, thereby increasing the adaptability and behavioral consistency of agents in fluctuating environments.</p><p><strong>Discussion: </strong>Our findings not only introduce new perspectives and methodologies for memory retrieval in generative agents but also extend the utility of LLMs in memory management across varied AI agent applications.</p>\",\"PeriodicalId\":12525,\"journal\":{\"name\":\"Frontiers in Psychology\",\"volume\":\"16 \",\"pages\":\"1591618\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-05-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092450/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Psychology\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.3389/fpsyg.2025.1591618\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Psychology","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3389/fpsyg.2025.1591618","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract


Introduction: The surge in the capabilities of large language models (LLMs) has propelled the development of Artificial General Intelligence (AGI), highlighting generative agents as pivotal components for emulating complex AI behaviors. Given the high costs associated with individually training LLMs for each AI agent, there is a critical need for advanced memory retrieval mechanisms to maintain the unique characteristics and memories of individual AI agents.

Methods: In this research, we developed a text-based simulation of a generative agent world, constructing a community with multiple agents and locations in which certain levels of interaction were enabled. Within this framework, we introduced a novel memory retrieval system built on an Auxiliary Cross Attention Network (ACAN). This system computes and ranks attention weights between an agent's current state and its stored memories, selecting the most relevant memories for any given situation. We further incorporated LLM assistance: during training, the memories retrieved by our model are compared with those extracted by a baseline method, and a loss function constructed from these comparisons drives the optimization. To our knowledge, this is the first study to use LLMs to train a dedicated agent memory retrieval network.
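The abstract does not specify the ACAN architecture or the exact form of the comparison-based loss. As a purely illustrative reading, a minimal PyTorch sketch of the mechanism described above might look as follows; all class names, dimensions, and the margin formulation are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: the paper's actual ACAN layers, embedding
# dimensions, and loss are not given in the abstract above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionScorer(nn.Module):
    """Scores stored memories against the agent's current state.

    The current state is projected to an attention query and each
    memory to a key; the normalized attention weights act as
    relevance scores used for ranking.
    """

    def __init__(self, embed_dim: int = 768, hidden_dim: int = 256):
        super().__init__()
        self.q_proj = nn.Linear(embed_dim, hidden_dim)
        self.k_proj = nn.Linear(embed_dim, hidden_dim)
        self.scale = hidden_dim ** -0.5

    def forward(self, state: torch.Tensor, memories: torch.Tensor) -> torch.Tensor:
        # state: (embed_dim,) embedding of the current situation
        # memories: (num_memories, embed_dim) stored memory embeddings
        q = self.q_proj(state)                 # (hidden_dim,)
        k = self.k_proj(memories)              # (num_memories, hidden_dim)
        logits = k @ q * self.scale            # (num_memories,)
        return F.softmax(logits, dim=-1)       # relevance weight per memory


def retrieve_top_k(scorer: CrossAttentionScorer,
                   state: torch.Tensor,
                   memories: torch.Tensor,
                   k: int = 5) -> torch.Tensor:
    """Indices of the k highest-weighted memories for the current state."""
    weights = scorer(state, memories)
    return torch.topk(weights, k=min(k, weights.numel())).indices


def preference_loss(weights: torch.Tensor,
                    preferred_idx: int,
                    rejected_idx: int,
                    margin: float = 0.1) -> torch.Tensor:
    """Pairwise margin loss from an LLM comparison (hypothetical form).

    One plausible reading of the training scheme: an LLM judges whether
    the model's retrieval or the baseline's better fits the situation,
    and the preferred memory's weight must beat the other's by `margin`.
    """
    return F.relu(margin - (weights[preferred_idx] - weights[rejected_idx]))
```

Under this reading, the LLM's judgment of which retrieval better fits the current situation would supply the (preferred_idx, rejected_idx) pairs during training, while inference would need only retrieve_top_k.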

Results: Our empirical evaluations demonstrate that this approach substantially enhances the quality of memory retrieval, thereby increasing the adaptability and behavioral consistency of agents in fluctuating environments.

Discussion: Our findings not only introduce new perspectives and methodologies for memory retrieval in generative agents but also extend the utility of LLMs in memory management across varied AI agent applications.

Source journal
Frontiers in Psychology (PSYCHOLOGY, MULTIDISCIPLINARY)
CiteScore: 5.30
Self-citation rate: 13.20%
Annual articles: 7396
Review time: 14 weeks
About the journal: Frontiers in Psychology is the largest journal in its field, publishing rigorously peer-reviewed research across the psychological sciences, from clinical research to cognitive science, from perception to consciousness, from imaging studies to human factors, and from animal cognition to social psychology. Field Chief Editor Axel Cleeremans at the Free University of Brussels is supported by an outstanding Editorial Board of international researchers. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics, clinicians and the public worldwide. The journal publishes the best research across the entire field of psychology. Today, psychological science is becoming increasingly important at all levels of society, from the treatment of clinical disorders to our basic understanding of how the mind works. It is highly interdisciplinary, borrowing questions from philosophy, methods from neuroscience and insights from clinical practice - all in the goal of furthering our grasp of human nature and society, as well as our ability to develop new intervention methods.