{"title":"column_多旋转编码算法的架构","authors":"M. Neschen","doi":"10.1109/HICSS.1992.183156","DOIUrl":null,"url":null,"abstract":"In order to improve performance when large systems of simple discrete variables are simulated on general-purpose-computers, multi-spin-coding algorithms have been developed. In this paper, a new architecture is proposed which exploits that kind of SIMD parallelism to a high degree using a large array of cheap memory chips which is directly connected to an army of bit-sequential processors. As each processor can perform different operations simultaneously on the incoming bits, an SIMD*MISD architecture for bit operations results. Many applications including lattice-oriented spin simulations and attractor neural network are presented and discussed for efficiency on this structure. As neural network simulations can be largely accelerated by restricting operations to flipped spins, special hardware is suggested which allows the generation of their indices at a maximum rate.<<ETX>>","PeriodicalId":103288,"journal":{"name":"Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"COLUMNUS-an architecture for multi-spin-coding algorithms\",\"authors\":\"M. Neschen\",\"doi\":\"10.1109/HICSS.1992.183156\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In order to improve performance when large systems of simple discrete variables are simulated on general-purpose-computers, multi-spin-coding algorithms have been developed. In this paper, a new architecture is proposed which exploits that kind of SIMD parallelism to a high degree using a large array of cheap memory chips which is directly connected to an army of bit-sequential processors. As each processor can perform different operations simultaneously on the incoming bits, an SIMD*MISD architecture for bit operations results. Many applications including lattice-oriented spin simulations and attractor neural network are presented and discussed for efficiency on this structure. 
As neural network simulations can be largely accelerated by restricting operations to flipped spins, special hardware is suggested which allows the generation of their indices at a maximum rate.<<ETX>>\",\"PeriodicalId\":103288,\"journal\":{\"name\":\"Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1992-01-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HICSS.1992.183156\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HICSS.1992.183156","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
COLUMNUS - an architecture for multi-spin-coding algorithms
M. Neschen
Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, 1992
DOI: 10.1109/HICSS.1992.183156
In order to improve performance when large systems of simple discrete variables are simulated on general-purpose computers, multi-spin-coding algorithms have been developed. In this paper, a new architecture is proposed which exploits this kind of SIMD parallelism to a high degree, using a large array of cheap memory chips connected directly to an array of bit-sequential processors. As each processor can perform different operations simultaneously on the incoming bits, the result is an SIMD*MISD architecture for bit operations. Many applications, including lattice-oriented spin simulations and attractor neural networks, are presented, and their efficiency on this structure is discussed. As neural network simulations can be greatly accelerated by restricting operations to flipped spins, special hardware is suggested which allows the generation of their indices at the maximum rate.
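To make the multi-spin-coding idea concrete, the following is a minimal software sketch, not the COLUMNUS hardware itself: 64 Ising spins are packed one per bit into a 64-bit word, so a single bitwise operation updates all 64 spins at once (the SIMD parallelism the abstract refers to), and the indices of flipped spins are then recovered bit by bit, a software stand-in for the index-generation hardware the paper suggests. The function names (flip_spins, emit_flipped_indices) are hypothetical, and __builtin_ctzll assumes a GCC/Clang compiler.

```c
/* Multi-spin-coding sketch: 64 spins per machine word.
 * Bit i = 1 means spin i is "up"; XOR with a mask flips
 * exactly the spins selected by that mask. */
#include <stdio.h>
#include <stdint.h>

/* Flip all spins whose bit is set in `mask`. */
static uint64_t flip_spins(uint64_t spins, uint64_t mask)
{
    return spins ^ mask;
}

/* Enumerate the indices of the flipped spins -- the operation the
 * paper proposes to accelerate with dedicated hardware. */
static void emit_flipped_indices(uint64_t flipped)
{
    while (flipped) {
        int idx = __builtin_ctzll(flipped); /* index of lowest set bit */
        printf("spin %d flipped\n", idx);
        flipped &= flipped - 1;             /* clear that bit */
    }
}

int main(void)
{
    uint64_t spins = 0xF0F0F0F0F0F0F0F0ull; /* initial configuration   */
    uint64_t mask  = 0x0000000000000101ull; /* flip spins 0 and 8      */

    uint64_t updated = flip_spins(spins, mask);
    emit_flipped_indices(spins ^ updated);  /* recover flip positions  */
    return 0;
}
```

Restricting work to the set bits of `spins ^ updated` is what makes the flipped-spin optimization pay off: in a sparsely changing network, only a few indices are emitted per word instead of all 64.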