{"title":"使用可训练逻辑网络的模式分类","authors":"B. W. Evans","doi":"10.1109/IJCNN.1989.118599","DOIUrl":null,"url":null,"abstract":"The author describes a new pattern classification algorithm which has the simplicity of the well-known multilinear classifier but is capable of learning patterns through supervised training. This is achieved by replacing the discretely valued logic functions employed in the conventional classifier with continuous extensions. The resulting differentiable relationship between network parameters and outputs permits the use of gradient descent methods to select optimal classifier parameters. This classifier can be implemented as a network whose structure is well suited to highly parallel hardware implementation. Essentially, the same network can be used both to compute weight adjustments and perform classifications, so that the same hardware could be used for both rapid training and classification. The author has applied this classifier to a noisy parity detection problem. The classification error frequency obtained in this example compares favourably with the theoretical lower bound.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Pattern classification using trainable logic networks\",\"authors\":\"B. W. Evans\",\"doi\":\"10.1109/IJCNN.1989.118599\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The author describes a new pattern classification algorithm which has the simplicity of the well-known multilinear classifier but is capable of learning patterns through supervised training. This is achieved by replacing the discretely valued logic functions employed in the conventional classifier with continuous extensions. The resulting differentiable relationship between network parameters and outputs permits the use of gradient descent methods to select optimal classifier parameters. This classifier can be implemented as a network whose structure is well suited to highly parallel hardware implementation. Essentially, the same network can be used both to compute weight adjustments and perform classifications, so that the same hardware could be used for both rapid training and classification. The author has applied this classifier to a noisy parity detection problem. 
The classification error frequency obtained in this example compares favourably with the theoretical lower bound.<<ETX>>\",\"PeriodicalId\":199877,\"journal\":{\"name\":\"International 1989 Joint Conference on Neural Networks\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International 1989 Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1989.118599\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International 1989 Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1989.118599","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Pattern classification using trainable logic networks
The author describes a new pattern classification algorithm that has the simplicity of the well-known multilinear classifier but is capable of learning patterns through supervised training. This is achieved by replacing the discretely valued logic functions employed in the conventional classifier with continuous extensions. The resulting differentiable relationship between network parameters and outputs permits the use of gradient descent methods to select optimal classifier parameters. This classifier can be implemented as a network whose structure is well suited to highly parallel hardware implementation. Essentially, the same network can be used both to compute weight adjustments and to perform classifications, so that the same hardware could be used for both rapid training and classification. The author has applied this classifier to a noisy parity detection problem. The classification error frequency obtained in this example compares favourably with the theoretical lower bound.
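
The abstract does not spell out the form of the continuous extensions, so the following Python sketch only illustrates the underlying idea under stated assumptions: every two-input Boolean function has a unique multilinear extension f(a, b) = w0 + w1*a + w2*b + w3*a*b (for example AND -> a*b, OR -> a + b - a*b, XOR -> a + b - 2*a*b), and a small tree of such units with trainable weights is fitted to a noisy parity task by gradient descent. The tree layout, noise level, hyperparameters, and the use of a numerical gradient are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(w, a, b):
    """Multilinear two-input soft gate: w0 + w1*a + w2*b + w3*a*b."""
    return w[0] + w[1] * a + w[2] * b + w[3] * a * b

def network(W, x):
    """Binary tree of soft gates: 4 inputs -> 2 hidden gates -> 1 output."""
    h1 = unit(W[0], x[:, 0], x[:, 1])
    h2 = unit(W[1], x[:, 2], x[:, 3])
    return unit(W[2], h1, h2)

def loss(W, x, y):
    return np.mean((network(W, x) - y) ** 2)

# Noisy parity data: the target is the XOR of 4 bits, and the inputs are
# the bits corrupted by additive Gaussian noise (noise level is assumed).
bits = rng.integers(0, 2, size=(2000, 4))
y = bits.sum(axis=1) % 2
x = bits + 0.2 * rng.standard_normal(bits.shape)

# Plain gradient descent.  A central-difference gradient keeps the sketch
# dependency-free; the paper instead exploits the exact analytic gradient
# that the continuous extensions make available.
W = 0.1 * rng.standard_normal((3, 4))
lr, eps = 0.1, 1e-5
for step in range(3000):
    g = np.zeros_like(W)
    for idx in np.ndindex(*W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        g[idx] = (loss(Wp, x, y) - loss(Wm, x, y)) / (2 * eps)
    W -= lr * g

# Whether descent reaches the global optimum depends on the initialisation;
# the paper compares its achieved error frequency with a theoretical bound.
pred = (network(W, x) > 0.5).astype(int)
print("empirical error rate on noisy parity:", np.mean(pred != y))
```

Because the output is differentiable in every weight, the same forward structure that produces a classification also supplies the quantities needed for a gradient step; this is the property the abstract points to when it notes that one network, and hence one piece of hardware, can serve both rapid training and classification.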