{"title":"基于子符号神经网络的自然语言处理","authors":"R. Miikkulainen","doi":"10.1201/9780367813239-8","DOIUrl":null,"url":null,"abstract":"Natural language processing appears on the surface to be a strongly symbolic activity. Words are symbols that stand for objects and concepts in the real world, and they are put together into sentences that obey well-speci ed grammar rules. It is no surprise that for several decades natural language processing research has been dominated by the symbolic approach. Linguists have focused on describing language systems based on versions of the Universal Grammar. Arti cial Intelligence researchers have built large programs where linguistic and world knowledge is expressed in symbolic structures, usually in LISP. Relatively little attention has been paid to various cognitive e ects in language processing. Human language users perform di erently from their linguistic competence, that is, from their knowledge of how to communicate correctly using language. Some linguistic structures (such as deep embeddings) are harder to deal with than others. People make mistakes when they speak, but fortunately it is not that hard to understand language that is ungrammatical or cluttered with errors. Linguistic and symbolic arti cial intelligence theories have little to say about where such e ects come from. Yet if one wants to build machines that would communicate naturally with people, it is important to understand and model cognitive e ects in natural language processing.","PeriodicalId":285190,"journal":{"name":"Neural Network Perspectives on Cognition and Adaptive Robotics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Natural Language Processing with Subsymbolic Neural Networks\",\"authors\":\"R. Miikkulainen\",\"doi\":\"10.1201/9780367813239-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Natural language processing appears on the surface to be a strongly symbolic activity. Words are symbols that stand for objects and concepts in the real world, and they are put together into sentences that obey well-speci ed grammar rules. It is no surprise that for several decades natural language processing research has been dominated by the symbolic approach. Linguists have focused on describing language systems based on versions of the Universal Grammar. Arti cial Intelligence researchers have built large programs where linguistic and world knowledge is expressed in symbolic structures, usually in LISP. Relatively little attention has been paid to various cognitive e ects in language processing. Human language users perform di erently from their linguistic competence, that is, from their knowledge of how to communicate correctly using language. Some linguistic structures (such as deep embeddings) are harder to deal with than others. People make mistakes when they speak, but fortunately it is not that hard to understand language that is ungrammatical or cluttered with errors. Linguistic and symbolic arti cial intelligence theories have little to say about where such e ects come from. 
Yet if one wants to build machines that would communicate naturally with people, it is important to understand and model cognitive e ects in natural language processing.\",\"PeriodicalId\":285190,\"journal\":{\"name\":\"Neural Network Perspectives on Cognition and Adaptive Robotics\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Network Perspectives on Cognition and Adaptive Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1201/9780367813239-8\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Network Perspectives on Cognition and Adaptive Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1201/9780367813239-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Natural Language Processing with Subsymbolic Neural Networks
Natural language processing appears on the surface to be a strongly symbolic activity. Words are symbols that stand for objects and concepts in the real world, and they are put together into sentences that obey well-specified grammar rules. It is no surprise that for several decades natural language processing research has been dominated by the symbolic approach. Linguists have focused on describing language systems based on versions of the Universal Grammar. Artificial Intelligence researchers have built large programs where linguistic and world knowledge is expressed in symbolic structures, usually in LISP. Relatively little attention has been paid to various cognitive effects in language processing. Human language users perform differently from their linguistic competence, that is, from their knowledge of how to communicate correctly using language. Some linguistic structures (such as deep embeddings) are harder to deal with than others. People make mistakes when they speak, but fortunately it is not that hard to understand language that is ungrammatical or cluttered with errors. Linguistic and symbolic artificial intelligence theories have little to say about where such effects come from. Yet if one wants to build machines that would communicate naturally with people, it is important to understand and model cognitive effects in natural language processing.
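
The abstract contrasts brittle symbolic representations with subsymbolic processing, where robustness to errors comes for free. The sketch below is not taken from the chapter; it assumes an Elman-style simple recurrent network with untrained random weights, a toy vocabulary, and arbitrary dimensions, purely to illustrate the subsymbolic style of processing: a sentence is folded word by word into a continuous hidden-state vector, so an unknown or misspelled word degrades the representation gradually instead of breaking a symbolic parse.

    # Illustrative sketch only (not the chapter's model): an Elman-style
    # simple recurrent network over distributed word vectors.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary with random, untrained embeddings; real systems learn these.
    vocab = ["the", "boy", "saw", "a", "dog", "<unk>"]
    dim_emb, dim_hidden = 8, 16
    embeddings = {w: rng.normal(0, 0.1, dim_emb) for w in vocab}

    # Untrained recurrent weights, chosen only to show the mechanics.
    W_in = rng.normal(0, 0.1, (dim_hidden, dim_emb))
    W_rec = rng.normal(0, 0.1, (dim_hidden, dim_hidden))

    def encode(sentence):
        """Fold a word sequence into a single hidden-state vector."""
        h = np.zeros(dim_hidden)
        for word in sentence.split():
            # Unknown or misspelled words fall back to <unk> rather than failing.
            x = embeddings.get(word, embeddings["<unk>"])
            h = np.tanh(W_in @ x + W_rec @ h)
        return h

    clean = encode("the boy saw a dog")
    noisy = encode("the boy saw a dgo")  # typo maps to <unk>
    # The two sentence representations remain close: processing degrades
    # gracefully instead of rejecting the ungrammatical input outright.
    print(np.linalg.norm(clean - noisy))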