Stochastic neuromorphic learning machines for weakly labeled data
E. Neftci
2016 IEEE 34th International Conference on Computer Design (ICCD), October 2016. DOI: 10.1109/ICCD.2016.7753355
On learning tasks where humans typically outperform computers, neuromorphic learning machines offer potential advantages over mainstream technologies in power and complexity. Here, we present Synaptic Sampling Machines (S2M), a class of stochastic neural networks that use stochasticity at the connections (synapses) to implement energy-efficient semi-supervised and unsupervised learning from weakly labeled or unlabeled data. Stochastic synapses play a dual role: they act as a regularizer during learning and as a mechanism for implementing stochasticity in neural networks. We present an S2M network architecture well suited to a dedicated digital implementation that is potentially a hundredfold more energy-efficient than equivalent algorithms running on GPUs.
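The dual role of stochastic synapses described above can be illustrated with multiplicative Bernoulli ("blank-out") noise on a weight matrix: each synapse transmits independently with some probability during training, and the expected weight is used at test time. This is a generic sketch of that idea in the spirit of DropConnect-style regularization, not the authors' event-driven spiking implementation; the function name, the probability `p`, and the toy values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_synapse_forward(x, W, p=0.5, train=True):
    """Layer forward pass with Bernoulli blank-out synapses.

    During training each synapse transmits independently with
    probability p, injecting per-connection stochasticity that also
    regularizes learning. At test time the deterministic expected
    weight p * W is used instead. (Sketch only; the S2M itself uses
    spiking, event-driven synapse dynamics.)
    """
    if train:
        mask = rng.random(W.shape) < p      # one Bernoulli draw per synapse
        return x @ (W * mask).T             # noisy, regularized transmission
    return x @ (p * W).T                    # expected-value transmission

# Hypothetical toy usage: 4 inputs, 3 output units.
x = np.ones((1, 4))
W = np.full((3, 4), 0.5)
y_test = stochastic_synapse_forward(x, W, p=0.5, train=False)   # deterministic
y_train = stochastic_synapse_forward(x, W, p=0.5, train=True)   # stochastic
```

Averaged over many training-mode draws, the stochastic output converges to the deterministic test-mode output, which is why a single scaled weight matrix suffices at inference time.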