{"title":"高效变压器自注意硬件的神经形态原理。","authors":"Nathan Leroux, Jan Finkbeiner, Emre Neftci","doi":"10.1038/s43588-025-00868-9","DOIUrl":null,"url":null,"abstract":"Strong barriers remain between neuromorphic engineering and machine learning, especially with regard to recent large language models (LLMs) and transformers. This Comment makes the case that neuromorphic engineering may hold the keys to more efficient inference with transformer-like models.","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":"5 9","pages":"708-710"},"PeriodicalIF":18.3000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Neuromorphic principles in self-attention hardware for efficient transformers\",\"authors\":\"Nathan Leroux, Jan Finkbeiner, Emre Neftci\",\"doi\":\"10.1038/s43588-025-00868-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Strong barriers remain between neuromorphic engineering and machine learning, especially with regard to recent large language models (LLMs) and transformers. This Comment makes the case that neuromorphic engineering may hold the keys to more efficient inference with transformer-like models.\",\"PeriodicalId\":74246,\"journal\":{\"name\":\"Nature computational science\",\"volume\":\"5 9\",\"pages\":\"708-710\"},\"PeriodicalIF\":18.3000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Nature computational science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.nature.com/articles/s43588-025-00868-9\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature computational science","FirstCategoryId":"1085","ListUrlMain":"https://www.nature.com/articles/s43588-025-00868-9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Neuromorphic principles in self-attention hardware for efficient transformers
Strong barriers remain between neuromorphic engineering and machine learning, especially with regard to recent large language models (LLMs) and transformers. This Comment makes the case that neuromorphic engineering may hold the keys to more efficient inference with transformer-like models.
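To make the efficiency concern concrete, the following is a minimal illustrative sketch (not taken from the Comment) of single-head scaled dot-product self-attention as it runs during autoregressive transformer inference. At every decoding step the new query must be matched against all cached keys and values, so memory traffic grows with sequence length; this is the kind of cost that efficient attention hardware aims to reduce. Function and variable names (attend, K_cache, V_cache) are illustrative choices, not the authors' terminology.

# Illustrative sketch, not from the Comment: one decoding step of
# scaled dot-product self-attention with a growing key/value cache.
import numpy as np

def attend(q, K_cache, V_cache):
    """One decoding step of single-head scaled dot-product attention.

    q:       (d,)   query for the current token
    K_cache: (t, d) keys of all tokens seen so far
    V_cache: (t, d) values of all tokens seen so far
    """
    d = q.shape[-1]
    scores = K_cache @ q / np.sqrt(d)        # similarity of the query to every cached token
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ V_cache                 # (d,) attention output

# Toy usage: decode four steps with d = 8, appending each token's key/value to the cache.
rng = np.random.default_rng(0)
d = 8
K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
for step in range(4):
    k, v, q = rng.standard_normal((3, d))
    K_cache = np.vstack([K_cache, k])        # the cache grows by one row per generated token
    V_cache = np.vstack([V_cache, v])
    out = attend(q, K_cache, V_cache)

Even in this toy form, the per-step work is dominated by streaming the ever-growing K_cache and V_cache through the dot products, which is why the Comment's focus on hardware that co-locates memory and attention computation is relevant to inference efficiency.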