LLMs and generative agent-based models for complex systems research

Impact Factor: 13.7 · CAS Tier 1 (Biology) · JCR Q1 (Biology)
Yikang Lu, Alberto Aleta, Chunpeng Du, Lei Shi, Yamir Moreno
{"title":"LLMs and generative agent-based models for complex systems research","authors":"Yikang Lu ,&nbsp;Alberto Aleta ,&nbsp;Chunpeng Du ,&nbsp;Lei Shi ,&nbsp;Yamir Moreno","doi":"10.1016/j.plrev.2024.10.013","DOIUrl":null,"url":null,"abstract":"<div><div>The advent of Large Language Models (LLMs) offers to transform research across natural and social sciences, offering new paradigms for understanding complex systems. In particular, Generative Agent-Based Models (GABMs), which integrate LLMs to simulate human behavior, have attracted increasing public attention due to their potential to model complex interactions in a wide range of artificial environments. This paper briefly reviews the disruptive role LLMs are playing in fields such as network science, evolutionary game theory, social dynamics, and epidemic modeling. We assess recent advancements, including the use of LLMs for predicting social behavior, enhancing cooperation in game theory, and modeling disease propagation. The findings demonstrate that LLMs can reproduce human-like behaviors, such as fairness, cooperation, and social norm adherence, while also introducing unique advantages such as cost efficiency, scalability, and ethical simplification. However, the results reveal inconsistencies in their behavior tied to prompt sensitivity, hallucinations and even the model characteristics, pointing to challenges in controlling these AI-driven agents. Despite their potential, the effective integration of LLMs into decision-making processes —whether in government, societal, or individual contexts— requires addressing biases, prompt design challenges, and understanding the dynamics of human-machine interactions. Future research must refine these models, standardize methodologies, and explore the emergence of new cooperative behaviors as LLMs increasingly interact with humans and each other, potentially transforming how decisions are made across various systems.</div></div>","PeriodicalId":403,"journal":{"name":"Physics of Life Reviews","volume":"51 ","pages":"Pages 283-293"},"PeriodicalIF":13.7000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physics of Life Reviews","FirstCategoryId":"99","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1571064524001386","RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

The advent of Large Language Models (LLMs) promises to transform research across the natural and social sciences, offering new paradigms for understanding complex systems. In particular, Generative Agent-Based Models (GABMs), which integrate LLMs to simulate human behavior, have attracted increasing public attention due to their potential to model complex interactions in a wide range of artificial environments. This paper briefly reviews the disruptive role LLMs are playing in fields such as network science, evolutionary game theory, social dynamics, and epidemic modeling. We assess recent advancements, including the use of LLMs for predicting social behavior, enhancing cooperation in game theory, and modeling disease propagation. The findings demonstrate that LLMs can reproduce human-like behaviors, such as fairness, cooperation, and social norm adherence, while also introducing unique advantages such as cost efficiency, scalability, and ethical simplification. However, the results reveal inconsistencies in their behavior tied to prompt sensitivity, hallucinations, and even the characteristics of the underlying model, pointing to challenges in controlling these AI-driven agents. Despite their potential, the effective integration of LLMs into decision-making processes (whether in governmental, societal, or individual contexts) requires addressing biases, prompt design challenges, and understanding the dynamics of human-machine interactions. Future research must refine these models, standardize methodologies, and explore the emergence of new cooperative behaviors as LLMs increasingly interact with humans and each other, potentially transforming how decisions are made across various systems.
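The GABM architecture the abstract describes, LLM-driven agents embedded in an agent-based simulation such as a repeated cooperation game, can be illustrated with a minimal sketch. The Python snippet below is not from the paper: `query_llm`, the prompt wording, and the agent names are hypothetical stand-ins, and the LLM call is replaced by a random policy so the sketch runs without any API access.

```python
# Minimal illustrative sketch of a generative agent-based model (GABM):
# each agent delegates its move in a repeated Prisoner's Dilemma to a
# language-model query. query_llm is a hypothetical placeholder, not the
# authors' method; a real GABM would call an actual LLM client here.
import random

# Standard Prisoner's Dilemma payoffs: (row player, column player)
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}


def query_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a random policy keeps the sketch runnable."""
    return random.choice(["C", "D"])


class GenerativeAgent:
    def __init__(self, name: str):
        self.name = name
        self.history = []  # list of (own_move, opponent_move) tuples

    def decide(self) -> str:
        # Natural-language prompt carrying the agent's memory of past rounds
        prompt = (f"You are {self.name} playing a repeated Prisoner's Dilemma. "
                  f"Past rounds (you, opponent): {self.history}. "
                  "Reply with 'C' to cooperate or 'D' to defect.")
        move = query_llm(prompt)
        return move if move in ("C", "D") else "D"  # guard against malformed output


def play(rounds: int = 10) -> None:
    a, b = GenerativeAgent("Alice"), GenerativeAgent("Bob")
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a.decide(), b.decide()
        pa, pb = PAYOFFS[(ma, mb)]
        score_a, score_b = score_a + pa, score_b + pb
        a.history.append((ma, mb))
        b.history.append((mb, ma))
    print(f"Final scores: Alice={score_a}, Bob={score_b}")


if __name__ == "__main__":
    play()
```

Swapping `query_llm` for a real LLM client and varying the prompt wording is exactly where the review's concerns about prompt sensitivity and hallucinations enter the picture.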
Journal

Physics of Life Reviews (Biology / Biophysics)
CiteScore: 20.30
Self-citation rate: 14.50%
Articles published per year: 52
Review turnaround: 8 days
About the journal: Physics of Life Reviews, published quarterly, is an international journal dedicated to review articles on the physics of living systems, complex phenomena in biological systems, and related fields including artificial life, robotics, mathematical bio-semiotics, and artificial intelligent systems. Serving as a unifying force across disciplines, the journal explores living systems comprehensively, from molecules to populations, genetics to mind, and artificial systems modeling these phenomena. Inviting reviews from actively engaged researchers, the journal seeks broad, critical, and accessible contributions that address recent progress and sometimes controversial accounts in the field.