Deep learning on Sleptsov nets
T. Shmeleva, J. Owsinski, A. A. Lawan
International Journal of Parallel Emergent and Distributed Systems, vol. 36, no. 1, pp. 535-548
Published: 2021-06-27
DOI: 10.1080/17445760.2021.1945055
Citations: 1
Abstract
Sleptsov nets are applied as a uniform language for specifying models of unconventional computation and artificial-intelligence systems. A technique for specifying neural networks with Sleptsov nets, including the multidimensional and multilayer networks of deep learning, is shown; ways of specifying basic activation functions by Sleptsov nets are discussed, with the threshold and sigmoid functions implemented. A methodology for training neural networks by loss-function minimisation is presented, based on a run of a pair of interacting Sleptsov nets: the first net implements the neural network following a data-flow approach, while the second net solves the optimisation task, adjusting the weights of the first net by gradient descent. The optimising net uses the earlier developed technology of programming in Sleptsov nets with reverse control flow and the subnet-call technique. Real numbers and arrays are represented as markings of a single place of a Sleptsov net. Hyperperformance is achieved through the possibility of implementing massively parallel computations.
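The building blocks the abstract names (the threshold and sigmoid activation functions, and gradient-descent adjustment of weights against a loss) can be sketched in conventional terms outside the Sleptsov-net formalism. The sketch below is a minimal illustration of those standard definitions, not the paper's Sleptsov-net encoding; all function names and the learning rate are assumptions for illustration only.

```python
import math

def sigmoid(x):
    """Sigmoid activation: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def threshold(x, theta=0.0):
    """Threshold (step) activation: 1 if the input exceeds theta, else 0."""
    return 1.0 if x > theta else 0.0

def gradient_descent_step(w, grad, lr=0.1):
    """One gradient-descent update: move each weight against its gradient."""
    return [wi - lr * gi for wi, gi in zip(w, grad)]

# Example: a single sigmoid neuron, one update on one training sample.
w = [0.5, -0.3]          # weights (illustrative values)
x = [1.0, 2.0]           # input sample
target = 1.0             # desired output
y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
# Squared-error loss L = (y - target)^2, so dL/dw_i = 2*(y - target)*y*(1 - y)*x_i
grad = [2 * (y - target) * y * (1 - y) * xi for xi in x]
w = gradient_descent_step(w, grad)  # the update moves y toward the target
```

In the paper's setting, the first Sleptsov net would compute the forward pass (the role of `sigmoid` and the weighted sum here), while the second net would perform the role of `gradient_descent_step`, rewriting the weight markings of the first.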