A. Wakatani, A. Murakami
2013 IEEE International Conference on Consumer Electronics (ICCE), pp. 518-519, published 2013-03-28
DOI: 10.1109/ICCE.2013.6487002
Parallel implementation of Aggressive PNN method for devices with GPUs
We implement the Aggressive PNN method on GPUs using CUDA to generate codebooks for VQ compression, achieving a speedup of up to 4.01 over the tau PNN method on the CPU. Our second method enhances parallelism by using indirect vectors to reduce the number of idle threads, improving performance by a further 20% or so.
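For context, the PNN (pairwise nearest neighbor) family of methods builds a VQ codebook by repeatedly merging the pair of clusters with the lowest merge cost until the target codebook size is reached. The sketch below is an illustrative sequential version of that merge rule, plus a toy `compact_live` helper showing the "indirect vector" idea of compacting live cluster indices so GPU threads are not wasted on merged-away slots; it is not the authors' CUDA implementation, and all names here are hypothetical.

```python
# Illustrative sequential sketch of PNN codebook generation for VQ.
# NOT the paper's CUDA code; it only shows the merge step the paper
# parallelizes, and the index-compaction idea behind "indirect vectors".
import numpy as np

def pnn_codebook(vectors, codebook_size):
    """Merge clusters pairwise until `codebook_size` centroids remain."""
    centroids = [np.asarray(v, dtype=float) for v in vectors]
    counts = [1] * len(centroids)
    while len(centroids) > codebook_size:
        best = None  # (cost, i, j) for the cheapest merge found so far
        for i in range(len(centroids)):
            for j in range(i + 1, len(centroids)):
                # Ward-style merge cost: weighted squared distance
                w = counts[i] * counts[j] / (counts[i] + counts[j])
                cost = w * np.sum((centroids[i] - centroids[j]) ** 2)
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        n = counts[i] + counts[j]
        merged = (counts[i] * centroids[i] + counts[j] * centroids[j]) / n
        centroids[i], counts[i] = merged, n
        del centroids[j], counts[j]
    return np.array(centroids)

def compact_live(alive):
    """'Indirect vector' idea: a compacted list of live cluster indices,
    so a hypothetical GPU thread t would process live[t] instead of
    slot t, avoiding idle threads on merged-away slots."""
    return [i for i, a in enumerate(alive) if a]
```

On a GPU, the inner distance computations for one merge step can be evaluated in parallel across thread blocks; as clusters are merged away, the indirect vector keeps the remaining work densely packed over the launched threads.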