{"title":"在支持cuda的GPU上批量执行欧几里得算法","authors":"Toru Fujita, K. Nakano, Yasuaki Ito","doi":"10.15803/IJNC.6.1_42","DOIUrl":null,"url":null,"abstract":"The bulk execution of a sequential algorithm is to execute it for many different inputs in turn or at the same time. A sequential algorithm is oblivious if the address accessed at each time unit is independent of the input. It is known that the bulk execution of an oblivious sequential algorithm can be implemented to run on a GPU very efficiently. The main purpose of our work is to implement the bulk execution of a Euclidean algorithm computing the GCD (Greatest Common Divisor) of two large numbers in a GPU. We first present a new efficient Euclidean algorithm that we call the Approximate Euclidean algorithm. The idea of the Approximate Euclidean algorithm is to compute an approximation of quotient by just one 64-bit division and to use it for reducing the number of iterations of the Euclidean algorithm. Unfortunately, the Approximate Euclidean algorithm is not oblivious. To show that the bulk execution of the Approximate Euclidean algorithm can be implemented efficiently in the GPU, we introduce a semi-oblivious sequential algorithms, which is almost oblivious. We show that the Approximate Euclidean algorithm can be implemented as a semi-oblivious algorithm. The experimental results show that our parallel implementation of the Approximate Euclidean algorithm for 1024- bit integers running on GeForce GTX Titan X GPU is 90 times faster than the Intel Xeon CPU implementation.Â","PeriodicalId":270166,"journal":{"name":"Int. J. Netw. Comput.","volume":"213 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Bulk execution of Euclidean algorithms on the CUDA-enabled GPU\",\"authors\":\"Toru Fujita, K. Nakano, Yasuaki Ito\",\"doi\":\"10.15803/IJNC.6.1_42\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The bulk execution of a sequential algorithm is to execute it for many different inputs in turn or at the same time. A sequential algorithm is oblivious if the address accessed at each time unit is independent of the input. It is known that the bulk execution of an oblivious sequential algorithm can be implemented to run on a GPU very efficiently. The main purpose of our work is to implement the bulk execution of a Euclidean algorithm computing the GCD (Greatest Common Divisor) of two large numbers in a GPU. We first present a new efficient Euclidean algorithm that we call the Approximate Euclidean algorithm. The idea of the Approximate Euclidean algorithm is to compute an approximation of quotient by just one 64-bit division and to use it for reducing the number of iterations of the Euclidean algorithm. Unfortunately, the Approximate Euclidean algorithm is not oblivious. To show that the bulk execution of the Approximate Euclidean algorithm can be implemented efficiently in the GPU, we introduce a semi-oblivious sequential algorithms, which is almost oblivious. We show that the Approximate Euclidean algorithm can be implemented as a semi-oblivious algorithm. The experimental results show that our parallel implementation of the Approximate Euclidean algorithm for 1024- bit integers running on GeForce GTX Titan X GPU is 90 times faster than the Intel Xeon CPU implementation.Â\",\"PeriodicalId\":270166,\"journal\":{\"name\":\"Int. J. Netw. 
Comput.\",\"volume\":\"213 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-01-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Int. J. Netw. Comput.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.15803/IJNC.6.1_42\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Netw. Comput.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15803/IJNC.6.1_42","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
The bulk execution of a sequential algorithm is to execute it for many different inputs, in turn or at the same time. A sequential algorithm is oblivious if the address accessed at each time unit is independent of the input. It is known that the bulk execution of an oblivious sequential algorithm can be implemented very efficiently on a GPU. The main purpose of our work is to implement the bulk execution of a Euclidean algorithm computing the GCD (Greatest Common Divisor) of two large numbers on a GPU. We first present a new, efficient Euclidean algorithm that we call the Approximate Euclidean algorithm. Its idea is to compute an approximation of the quotient with just one 64-bit division and to use it to reduce the number of iterations of the Euclidean algorithm. Unfortunately, the Approximate Euclidean algorithm is not oblivious. To show that its bulk execution can nevertheless be implemented efficiently on the GPU, we introduce the notion of a semi-oblivious sequential algorithm, which is almost oblivious, and we show that the Approximate Euclidean algorithm can be implemented as a semi-oblivious algorithm. Experimental results show that our parallel implementation of the Approximate Euclidean algorithm for 1024-bit integers running on a GeForce GTX Titan X GPU is 90 times faster than an implementation on an Intel Xeon CPU.
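The abstract's two key ideas can be made concrete. First, the approximate-quotient step: rather than performing a full multiple-precision division, the Euclidean quotient is estimated from the leading bits of the operands with a single 64-bit division. Below is a minimal sketch of that idea on 128-bit operands (the paper works on 1024-bit multiple-precision integers); the function names and the conservative underestimate of the quotient are our own illustrative choices, not the authors' implementation.

```cuda
#include <cstdint>

// One "approximate Euclidean" reduction, sketched on 128-bit operands.
// The quotient floor(a/b) is estimated with a single 64-bit division on
// the leading bits of a and b; dividing by b_hi + 1 underestimates the
// true quotient, so the multiply-subtract below can never overshoot.
using u128 = unsigned __int128;

static int bit_length(u128 x) {
    int n = 0;
    while (x) { x >>= 1; ++n; }
    return n;
}

u128 approx_gcd(u128 a, u128 b) {
    if (a < b) { u128 t = a; a = b; b = t; }        // keep a >= b
    while (b != 0) {
        int shift = bit_length(a) > 64 ? bit_length(a) - 64 : 0;
        uint64_t a_hi = (uint64_t)(a >> shift);     // leading 64 bits of a
        uint64_t b_hi = (uint64_t)(b >> shift);     // b aligned identically
        uint64_t q = (b_hi == UINT64_MAX) ? 1 : a_hi / (b_hi + 1);
        if (q == 0) q = 1;        // a >= b, so subtracting b once is safe
        a -= (u128)q * b;         // gcd(a, b) == gcd(a - q*b, b)
        if (a < b) { u128 t = a; a = b; b = t; }
    }
    return a;
}
```

Second, the bulk-execution pattern itself: a CUDA kernel in which each thread computes the GCD of its own input pair. The sketch below assumes plain 64-bit operands and the classical Euclidean algorithm in place of the paper's 1024-bit approximate variant; the kernel and function names are hypothetical.

```cuda
#include <cstdint>
#include <cuda_runtime.h>

// Bulk execution: thread i computes gcd(a[i], b[i]) for its own input
// pair, so one kernel launch runs the sequential algorithm for n inputs.
__global__ void gcd_bulk_kernel(const uint64_t *a, const uint64_t *b,
                                uint64_t *g, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    uint64_t x = a[i], y = b[i];
    while (y != 0) {            // data-dependent iteration count: this is
        uint64_t r = x % y;     // exactly where the algorithm stops being
        x = y;                  // oblivious
        y = r;
    }
    g[i] = x;
}

// Host-side launch: one thread per input pair (device pointers assumed).
void gcd_bulk(const uint64_t *d_a, const uint64_t *d_b, uint64_t *d_g, int n) {
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    gcd_bulk_kernel<<<blocks, threads>>>(d_a, d_b, d_g, n);
}
```

The data-dependent loop length in the kernel is the source of the non-obliviousness the abstract mentions: threads of a warp finish after different numbers of iterations, so their memory accesses diverge, and a semi-oblivious formulation keeps that divergence small enough for the bulk execution to remain efficient.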