Towards Human-Like Emergent Communication via Utility, Informativeness, and Complexity
Mycal Tucker, Julie Shah, Roger Levy, Noga Zaslavsky
Open Mind, vol. 9, pp. 418-451 (2025). DOI: 10.1162/opmi_a_00188. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11984795/pdf/
Two prominent, yet contrasting, theoretical views are available to characterize the underlying drivers of language evolution: on the one hand, task-specific utility maximization; on the other hand, task-agnostic communicative efficiency. The latter has recently been grounded in an information-theoretic tradeoff between communicative complexity and informativeness, known as the Information Bottleneck (IB) principle. Here, we integrate these two views and propose an information-constrained emergent communication framework that trades off utility, informativeness, and complexity. To train agents within our framework, we develop a method, called Vector-Quantized Variational Information Bottleneck (VQ-VIB), that allows agents to interact using information-constrained discrete communication embedded in a continuous vector space. We test this approach in three domains and show that pressure for informativeness facilitates faster learning and better generalization to novel domains. At the same time, limiting complexity yields better alignment with actual human languages. Lastly, we find that VQ-VIB outperforms previously proposed emergent communication methods; we posit that this is due to the semantically-meaningful communication embedding space that VQ-VIB affords. Overall, our work demonstrates the role of cognitively-motivated optimality principles in inducing aspects of human-like communication among artificial agents.
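For orientation, the Information Bottleneck principle referenced above is conventionally stated as a tradeoff between compressing a source variable and preserving information about a relevance variable. The abstract does not give the paper's exact training objective, so the following is only a hedged sketch of the three-way tradeoff it describes, using assumed notation (X for the speaker's input meaning, Z for the emitted communication vector, Y for the listener-relevant target, and lambda weights chosen for illustration):

\[
\text{Standard IB:} \quad \min_{q(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
\]
\[
\text{Utility-informativeness-complexity tradeoff (sketch):} \quad \max \; \mathbb{E}[\text{utility}] \;+\; \lambda_I \, I(Z;Y) \;-\; \lambda_C \, I(X;Z)
\]

Here \(I(X;Z)\) plays the role of communicative complexity and \(I(Z;Y)\) the role of informativeness; how the paper estimates these quantities and balances them against task utility is specified in the full text rather than in the abstract.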