Parallel Training of a Back-Propagation Neural Network Using CUDA
Xavier Sierra-Canto, Francisco Madera-Ramirez, Víctor Uc Cetina
2010 Ninth International Conference on Machine Learning and Applications
DOI: 10.1109/ICMLA.2010.52
Citations: 58
Abstract
Training artificial neural networks (ANNs) is a time-consuming process in machine learning systems. In this work we provide an implementation of the back-propagation algorithm on CUDA, a parallel computing architecture developed by NVIDIA. Using CUBLAS, a CUDA implementation of the Basic Linear Algebra Subprograms (BLAS) library, simplifies the process; however, custom kernels were still necessary because CUBLAS does not provide all of the required operations. The implementation was tested on two standard benchmark data sets, and the results show that the parallel training algorithm runs 63 times faster than its sequential version.
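The paper's source code is not reproduced here; the sketch below is only a rough illustration, under assumed names (sigmoid_kernel, layer_forward), of how cuBLAS calls and hand-written kernels can be combined in this kind of implementation. Matrix-vector products are delegated to cublasSgemv, while the element-wise sigmoid activation, an operation cuBLAS does not provide, is implemented as a custom kernel.

```cuda
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Element-wise sigmoid activation: one of the operations cuBLAS does not
// provide, so it is written as a custom kernel. (Illustrative sketch, not
// the authors' code.)
__global__ void sigmoid_kernel(float *v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] = 1.0f / (1.0f + expf(-v[i]));
}

// Forward pass of one layer: y = sigmoid(W * x), with W stored column-major
// as an (out x in) matrix. All pointers are device pointers.
void layer_forward(cublasHandle_t handle,
                   const float *d_W, const float *d_x, float *d_y,
                   int out, int in) {
    const float alpha = 1.0f, beta = 0.0f;
    // Matrix-vector product on the GPU via cuBLAS: d_y = W * d_x
    cublasSgemv(handle, CUBLAS_OP_N, out, in,
                &alpha, d_W, out, d_x, 1, &beta, d_y, 1);
    // Apply the non-linearity with the hand-written kernel.
    int threads = 256;
    int blocks = (out + threads - 1) / threads;
    sigmoid_kernel<<<blocks, threads>>>(d_y, out);
}
```

The backward pass can follow the same pattern: error back-propagation through the weights maps onto cuBLAS matrix-vector and rank-1 update routines (e.g. cublasSger for the weight update), while element-wise operations such as the activation derivative require further custom kernels.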