{"title":"平行分布梯度下降和上升方法","authors":"Yoshiyasu Takefuji","doi":"10.1109/IJCNN.1989.118349","DOIUrl":null,"url":null,"abstract":"Summary form only given, as follows. A parallel distributed processing architecture called an entropy machine (EM) is proposed. This machine, which is based on an artificial neural network composed of massive neurons and interconnections, is used for solving a variety of NP-complete optimization problems. The EM performs the parallel distributed gradient descent method or gradient ascent method to search for minima or maxima.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Parallel distributed gradient descent and ascent methods\",\"authors\":\"Yoshiyasu Takefuji\",\"doi\":\"10.1109/IJCNN.1989.118349\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Summary form only given, as follows. A parallel distributed processing architecture called an entropy machine (EM) is proposed. This machine, which is based on an artificial neural network composed of massive neurons and interconnections, is used for solving a variety of NP-complete optimization problems. The EM performs the parallel distributed gradient descent method or gradient ascent method to search for minima or maxima.<<ETX>>\",\"PeriodicalId\":199877,\"journal\":{\"name\":\"International 1989 Joint Conference on Neural Networks\",\"volume\":\"52 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1989-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International 1989 Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1989.118349\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International 1989 Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1989.118349","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Parallel distributed gradient descent and ascent methods
Summary form only given, as follows. A parallel distributed processing architecture called an entropy machine (EM) is proposed. The machine, based on an artificial neural network composed of a massive number of neurons and interconnections, is used to solve a variety of NP-complete optimization problems. The EM performs the parallel distributed gradient descent or gradient ascent method to search for minima or maxima.
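The abstract does not specify the update rule, but a minimal sketch of parallel distributed gradient descent over a Hopfield-style quadratic energy function conveys the idea: every neuron computes its gradient term from its own interconnections, and the whole state updates in one synchronous step. The energy form, learning rate, clamping, and all names (`energy`, `parallel_gradient_descent`, the toy weight matrix) are illustrative assumptions, not details from the paper:

```python
import numpy as np

def energy(v, W, b):
    """Hopfield-style quadratic energy: E(v) = -1/2 v^T W v - b^T v (assumed form)."""
    return -0.5 * v @ W @ v - b @ v

def parallel_gradient_descent(W, b, steps=200, lr=0.01, seed=0):
    """All neurons update simultaneously from the local gradient of E.

    For symmetric W, dE/dv_i = -(sum_j W_ij * v_j + b_i), so each
    component of v depends only on sums over its interconnections and
    can be updated in parallel. Negating the step gives gradient ascent.
    """
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, 1.0, size=b.shape)  # random initial neuron states
    for _ in range(steps):
        grad = -(W @ v + b)                   # gradient of E at the current state
        v = np.clip(v - lr * grad, 0.0, 1.0)  # synchronous descent step, clamped activations
    return v

# Toy 3-neuron instance; symmetric weights chosen arbitrarily for illustration.
W = np.array([[0.0, -2.0, 1.0],
              [-2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.5, 0.5, -1.0])
v = parallel_gradient_descent(W, b)
print("state:", v, "energy:", energy(v, W, b))
```

In this reading, encoding an NP-complete problem means choosing W and b so that low-energy states correspond to good solutions; the parallel update then searches for a minimum (or, with the sign flipped, a maximum).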