{"title":"基于前向状态变换和后向损失变换的神经网络","authors":"Bart Jacobs , David Sprunger","doi":"10.1016/j.entcs.2019.09.009","DOIUrl":null,"url":null,"abstract":"<div><p>This article studies (multilayer perceptron) neural networks with an emphasis on the transformations involved — both forward and backward — in order to develop a semantic/logical perspective that is in line with standard program semantics. The common two-pass neural network training algorithms make this viewpoint particularly fitting. In the forward direction, neural networks act as state transformers, using Kleisli composition for the multiset monad — for the linear parts of network layers. In the reverse direction, however, neural networks change losses of outputs to losses of inputs, thereby acting like a (real-valued) predicate transformer. In this way, backpropagation is functorial by construction, as shown in other works recently. We illustrate this perspective by training a simple instance of a neural network.</p></div>","PeriodicalId":38770,"journal":{"name":"Electronic Notes in Theoretical Computer Science","volume":"347 ","pages":"Pages 161-177"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.entcs.2019.09.009","citationCount":"4","resultStr":"{\"title\":\"Neural Nets via Forward State Transformation and Backward Loss Transformation\",\"authors\":\"Bart Jacobs , David Sprunger\",\"doi\":\"10.1016/j.entcs.2019.09.009\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>This article studies (multilayer perceptron) neural networks with an emphasis on the transformations involved — both forward and backward — in order to develop a semantic/logical perspective that is in line with standard program semantics. The common two-pass neural network training algorithms make this viewpoint particularly fitting. In the forward direction, neural networks act as state transformers, using Kleisli composition for the multiset monad — for the linear parts of network layers. In the reverse direction, however, neural networks change losses of outputs to losses of inputs, thereby acting like a (real-valued) predicate transformer. In this way, backpropagation is functorial by construction, as shown in other works recently. 
We illustrate this perspective by training a simple instance of a neural network.</p></div>\",\"PeriodicalId\":38770,\"journal\":{\"name\":\"Electronic Notes in Theoretical Computer Science\",\"volume\":\"347 \",\"pages\":\"Pages 161-177\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1016/j.entcs.2019.09.009\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Electronic Notes in Theoretical Computer Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1571066119301252\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electronic Notes in Theoretical Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1571066119301252","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
Neural Nets via Forward State Transformation and Backward Loss Transformation
This article studies (multilayer perceptron) neural networks with an emphasis on the transformations involved — both forward and backward — in order to develop a semantic/logical perspective that is in line with standard program semantics. The common two-pass neural network training algorithms make this viewpoint particularly fitting. In the forward direction, neural networks act as state transformers, using Kleisli composition for the multiset monad — for the linear parts of network layers. In the reverse direction, however, neural networks change losses of outputs to losses of inputs, thereby acting like a (real-valued) predicate transformer. In this way, backpropagation is functorial by construction, as shown in other works recently. We illustrate this perspective by training a simple instance of a neural network.
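To make the two directions in the abstract concrete, here is a minimal Haskell sketch. It is not code from the paper; the representation of the multiset monad as association lists with real weights, and the names eta, bind, forward, and backward, are illustrative assumptions. A layer's linear part is read as a Kleisli map a -> Mult b: the forward direction is Kleisli composition of such maps, and the backward direction pulls a real-valued loss on outputs back to a loss on inputs as a weighted sum.

    -- Illustrative sketch, not the paper's code: the real-weighted multiset
    -- monad as finite formal sums, represented by association lists.
    type Mult a = [(a, Double)]

    -- Monad unit: the singleton formal sum 1·x.
    eta :: a -> Mult a
    eta x = [(x, 1)]

    -- Kleisli extension: push a formal sum through a layer, multiplying weights.
    bind :: Mult a -> (a -> Mult b) -> Mult b
    bind m f = [ (y, r * s) | (x, r) <- m, (y, s) <- f x ]

    -- Forward state transformation: Kleisli composition of two layers.
    forward :: (a -> Mult b) -> (b -> Mult c) -> (a -> Mult c)
    forward f g = \x -> bind (f x) g

    -- Backward loss transformation: a real-valued loss on outputs is pulled
    -- back along a layer to a loss on inputs, as the weighted sum of losses.
    backward :: (a -> Mult b) -> (b -> Double) -> (a -> Double)
    backward f loss = \x -> sum [ r * loss y | (y, r) <- f x ]

Note that forward composes covariantly (states flow with the layers) while backward composes contravariantly (losses flow against them); under these assumptions, backward (forward f g) loss = backward f (backward g loss), which is the compositional behaviour the abstract attributes to backpropagation.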
About the journal:
ENTCS is a venue for the rapid electronic publication of conference proceedings, lecture notes, monographs, and similar material for which quick publication and electronic availability are appropriate. Organizers of conferences whose proceedings appear in ENTCS, and authors of other material appearing as a volume in the series, may make hard copies of the relevant volume for limited distribution. For example, conference proceedings may be distributed to participants at the meeting, and lecture notes may be distributed to those taking a course based on the material in the volume.