Language Understanding as a Constraint on Consensus Size in LLM Societies
Giordano De Marzo, Claudio Castellano, David Garcia
arXiv:2409.02822, arXiv - PHYS - Physics and Society, 2024-09-04
Applications of Large Language Models (LLMs) are moving toward collaborative tasks in which several agents interact with one another, as in an LLM society. In such a setting, large groups of LLMs could reach consensus on arbitrary norms for which no information favors one option over another, regulating their own behavior in a self-organized way. In human societies, the ability to reach consensus without institutions is limited by human cognitive capacities. To understand whether a similar phenomenon also characterizes LLMs, we apply methods from complexity science and principles from the behavioral sciences in a new approach of AI anthropology. We find that LLMs can reach consensus in groups, and that their opinion dynamics can be described by a function parametrized by a majority-force coefficient that determines whether consensus is possible. This majority force is stronger for models with higher language understanding capabilities and weaker in larger groups, leading to a critical group size beyond which consensus is infeasible for a given LLM. This critical group size grows exponentially with a model's language understanding capabilities; for the most advanced models, it can exceed the typical size of informal human groups by an order of magnitude.
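To make the abstract's central quantities concrete, below is a minimal simulation sketch. It assumes a standard nonlinear voter model in which an agent adopts an opinion held by a fraction x of the group with probability x^beta / (x^beta + (1 - x)^beta); this functional form, the parameter beta (standing in for the majority-force coefficient), the group sizes, and the update schedule are all illustrative assumptions, not the paper's actual LLM experiments or its fitted function.

```python
# Toy majority-force opinion dynamics. Everything here is an illustrative
# assumption, not the paper's implementation: binary opinions and the rule
#     P(adopt 1 | x) = x^beta / (x^beta + (1 - x)^beta),
# a standard nonlinear voter-model form, where x is the fraction of the
# group holding opinion 1 and beta plays the role of the majority-force
# coefficient. beta > 1 reinforces majorities; beta < 1 resists them.
import numpy as np

rng = np.random.default_rng(0)


def adoption_prob(x: float, beta: float) -> float:
    """Probability of adopting opinion 1 given fraction x holding it."""
    if x <= 0.0 or x >= 1.0:  # unanimous states are absorbing
        return x
    return x**beta / (x**beta + (1.0 - x) ** beta)


def sweeps_to_consensus(n_agents: int, beta: float, max_sweeps: int = 2000):
    """Run until full consensus; return sweeps used, or None on timeout."""
    opinions = rng.integers(0, 2, size=n_agents)  # random initial split
    ones = int(opinions.sum())                    # running count of opinion 1
    for sweep in range(max_sweeps):
        for _ in range(n_agents):                 # one sweep = n updates
            i = int(rng.integers(n_agents))
            new = int(rng.random() < adoption_prob(ones / n_agents, beta))
            ones += new - opinions[i]             # keep the count consistent
            opinions[i] = new
        if ones in (0, n_agents):                 # everyone agrees
            return sweep + 1
    return None


# A weak majority force fails to order large groups: the consensus /
# no-consensus boundary shifts to larger N as beta grows, a toy analogue
# of the paper's critical group size.
for beta in (0.5, 1.0, 2.0):
    for n in (10, 50, 200):
        t = sweeps_to_consensus(n, beta)
        outcome = f"consensus after {t} sweeps" if t else "no consensus (timeout)"
        print(f"beta={beta:<4} N={n:<4} {outcome}")
```

In this toy, raising beta increases the largest group size that still reaches consensus within the time budget, mirroring in spirit the paper's finding that the critical group size grows with language understanding. In the paper itself, by contrast, the majority force is estimated from the responses of actual LLM agents rather than imposed as an update rule.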