{"title":"大脑、思维和机器中的语言","authors":"Greta Tuckute, Nancy Kanwisher, Evelina Fedorenko","doi":"10.1146/annurev-neuro-120623-101142","DOIUrl":null,"url":null,"abstract":"It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. We discuss why, a priori, LMs might be expected to share similarities with the human language system. We then summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties—their architecture, task performance, or training—are critical for capturing human neural responses to language and review studies using LMs as in silico model organisms for testing hypotheses about language. These ongoing investigations bring us closer to understanding the representations and processes that underlie our ability to comprehend sentences and express thoughts in language.","PeriodicalId":8008,"journal":{"name":"Annual review of neuroscience","volume":"5 1","pages":""},"PeriodicalIF":12.1000,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Language in Brains, Minds, and Machines\",\"authors\":\"Greta Tuckute, Nancy Kanwisher, Evelina Fedorenko\",\"doi\":\"10.1146/annurev-neuro-120623-101142\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. 
We discuss why, a priori, LMs might be expected to share similarities with the human language system. We then summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties—their architecture, task performance, or training—are critical for capturing human neural responses to language and review studies using LMs as in silico model organisms for testing hypotheses about language. These ongoing investigations bring us closer to understanding the representations and processes that underlie our ability to comprehend sentences and express thoughts in language.\",\"PeriodicalId\":8008,\"journal\":{\"name\":\"Annual review of neuroscience\",\"volume\":\"5 1\",\"pages\":\"\"},\"PeriodicalIF\":12.1000,\"publicationDate\":\"2024-04-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annual review of neuroscience\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1146/annurev-neuro-120623-101142\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annual review of neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1146/annurev-neuro-120623-101142","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. We discuss why, a priori, LMs might be expected to share similarities with the human language system. We then summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties—their architecture, task performance, or training—are critical for capturing human neural responses to language and review studies using LMs as in silico model organisms for testing hypotheses about language. These ongoing investigations bring us closer to understanding the representations and processes that underlie our ability to comprehend sentences and express thoughts in language.
About the journal:
The Annual Review of Neuroscience is a well-established and comprehensive journal in the field of neuroscience, with a rich history and a commitment to scholarly communication. In publication since 1978, it has long served as an authoritative source of review articles in neuroscience.
The Annual Review of Neuroscience encompasses a wide range of topics within neuroscience, including but not limited to: molecular and cellular neuroscience, neurogenetics, developmental neuroscience, neural plasticity and repair, systems neuroscience, cognitive neuroscience, behavioral neuroscience, and the neurobiology of disease. Occasionally, the journal also features reviews on the history of neuroscience and ethical considerations within the field.