Why concepts are (probably) vectors

Steven T Piantadosi, Dyana C Y Muller, Joshua S Rule, Karthikeya Kaushik, Mark Gorenstein, Elena R Leib, Emily Sanford

Trends in Cognitive Sciences; Journal Article; published 2024-09-01 (Epub 2024-08-07); JCR Q1, Behavioral Sciences; Impact Factor 16.7; Region 1 (Psychology). DOI: 10.1016/j.tics.2024.06.011 (https://doi.org/10.1016/j.tics.2024.06.011)

Abstract: For decades, cognitive scientists have debated what kind of representation might characterize human concepts. Whatever the format of the representation, it must allow for the computation of varied properties, including similarities, features, categories, definitions, and relations. It must also support the development of theories, ad hoc categories, and knowledge of procedures. Here, we discuss why vector-based representations provide a compelling account that can meet all these needs while being plausibly encoded into neural architectures. This view has become especially promising with recent advances in both large language models and vector symbolic architectures. These innovations show how vectors can handle many properties traditionally thought to be out of reach for neural models, including compositionality, definitions, structures, and symbolic computational processes.
Journal introduction:
Essential reading for those working directly in the cognitive sciences or in related specialist areas, Trends in Cognitive Sciences provides an instant overview of current thinking for scientists, students and teachers who want to keep up with the latest developments in the cognitive sciences. The journal brings together research in psychology, artificial intelligence, linguistics, philosophy, computer science and neuroscience. Trends in Cognitive Sciences provides a platform for the interaction of these disciplines and the evolution of cognitive science as an independent field of study.