Content addressable networks
S. A. Brodsky
Optical Society of America Annual Meeting
DOI: 10.1364/oam.1992.mbb6
Content addressable networks (CAN) are a family of network learning algorithms for supervised, tutored, and self-organized systems based on binary weights and parallel binary computations. CAN networks directly address the implementation costs associated with high-precision weight storage and computation. CAN networks are efficient learning systems with capabilities comparable to those of analog networks. Supervised CAN systems use error information for weight corrections in a manner analogous to backpropagation gradient descent. The tutored CAN network model uses "yes" or "no" feedback as a guide for forming associative categories. The self-organized model derives corrections internally to form recall categories in an adaptive resonance theory (ART)-style network. The CAN algorithms derive advantages from their intrinsically binary nature and efficient implementation in both optical and VLSI computing systems. CAN solutions for quantized problems may be used directly to initialize analog backpropagation networks. The CAN network has been implemented optically, with optical computation of both recall and learning. Development of supervised CAN networks in VLSI with on-chip learning continues.
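The abstract does not specify the CAN update rule, but its central idea, supervised learning with binary weights corrected by an error signal rather than analog gradients, can be illustrated with a minimal sketch. The threshold unit, the bit-flip correction rule, and the AND-learning example below are all illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (NOT the paper's CAN algorithm): a single binary-weight
# threshold unit trained by error-driven bit flips. Weights are 0/1, recall is
# a parallel binary computation, and corrections use only the sign of the
# error, echoing the abstract's avoidance of high-precision analog weights.
import random

def recall(weights, x, theta):
    # Count of bits where both weight and input are 1, compared to a threshold.
    return 1 if sum(w & xi for w, xi in zip(weights, x)) >= theta else 0

def train(samples, n_bits, theta, epochs=100, seed=0):
    rng = random.Random(seed)
    w = [rng.randint(0, 1) for _ in range(n_bits)]  # random binary weights
    for _ in range(epochs):
        for x, target in samples:
            if recall(w, x, theta) != target:  # supervised error signal
                # Flip one active weight bit in the error-reducing direction.
                idx = [i for i in range(n_bits) if x[i] and w[i] != target]
                if idx:
                    w[rng.choice(idx)] = target
    return w

# Hypothetical usage: learn 2-input AND with binary weights and threshold 2.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train(data, n_bits=2, theta=2)
```

Because both recall and correction reduce to bitwise operations and counts, a rule of this general shape maps naturally onto the parallel optical and VLSI hardware the abstract describes.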