Transformers and cortical waves: encoders for pulling in context across time
Lyle Muller, Patricia S. Churchland, Terrence J. Sejnowski
Trends in Neurosciences (Q1, Neurosciences; Impact Factor 14.6)
DOI: 10.1016/j.tins.2024.08.006
Published: 2024-10-01 (Epub 2024-09-27)
Citations: 0
Abstract
The capabilities of transformer networks such as ChatGPT and other large language models (LLMs) have captured the world's attention. The crucial computational mechanism underlying their performance relies on transforming a complete input sequence - for example, all the words in a sentence - into a long 'encoding vector' that allows transformers to learn long-range temporal dependencies in naturalistic sequences. Specifically, 'self-attention' applied to this encoding vector enhances temporal context in transformers by computing associations between pairs of words in the input sequence. We suggest that waves of neural activity traveling across single cortical areas, or multiple regions on the whole-brain scale, could implement a similar encoding principle. By encapsulating recent input history into a single spatial pattern at each moment in time, cortical waves may enable a temporal context to be extracted from sequences of sensory inputs, the same computational principle as that used in transformers.
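As a rough illustration of the pairwise-association principle the abstract describes, the sketch below implements scaled dot-product self-attention over an encoded input sequence: every position's score against every other position yields attention weights, and each output is a weighted mix of all positions. All names, dimensions, and random inputs here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence.

    X: (seq_len, d_model) encoded input sequence (e.g. word embeddings).
    Returns a (seq_len, d_k) context-enriched representation.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Pairwise association scores between every pair of sequence positions.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position pulls in context from all positions' values.
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 4  # illustrative sizes
X = rng.standard_normal((seq_len, d_model))
Wq = rng.standard_normal((d_model, d_k))
Wk = rng.standard_normal((d_model, d_k))
Wv = rng.standard_normal((d_model, d_k))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

The key point for the analogy in the abstract is that the output at every position depends on the entire sequence at once, which is the sense in which a single encoding can encapsulate temporal context.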
About the journal:
For over four decades, Trends in Neurosciences (TINS) has been a prominent source of inspiring reviews and commentaries across all disciplines of neuroscience. TINS is a monthly, peer-reviewed journal, and its articles are curated by the Editor and authored by leading researchers in their respective fields. The journal communicates exciting advances in brain research, serves as a voice for the global neuroscience community, and highlights the contribution of neuroscientific research to medicine and society.