{"title":"基于MapReduce计算模型的高效频繁项集挖掘算法IOMRA","authors":"Sheng-Hui Liu, Shi-Jia Liu, Shi-Xuan Chen, Kun-Ming Yu","doi":"10.1109/CSE.2014.247","DOIUrl":null,"url":null,"abstract":"The goal of Frequent Item set Mining (FIM) is to find the biggest number of frequently used subsets from a big transaction database. In previous studies, using the advantage of multicore computing, the execution time of an Apriori algorithm was sharply decreased: when the size of a data set was more than TBs and a single host had been unable to afford a large number of operations by using a number of computers connected into a super computer to speed up execution as being the obvious solution. Some parallel Apriori algorithms, based on the MapReduce framework, have been proposed. However, with these algorithms, memory would be quickly exhausted and communication cost would rise sharply. This would greatly reduce execution efficiency. In this paper, we present an improved reformative Apriori algorithm that uses the length of each transaction to determine the size of the maximum merge candidate item sets. By reducing the production of low frequency item sets in Map function, memory exhaustion is ameliorated, greatly improving execution efficiency.","PeriodicalId":258990,"journal":{"name":"2014 IEEE 17th International Conference on Computational Science and Engineering","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"IOMRA - A High Efficiency Frequent Itemset Mining Algorithm Based on the MapReduce Computation Model\",\"authors\":\"Sheng-Hui Liu, Shi-Jia Liu, Shi-Xuan Chen, Kun-Ming Yu\",\"doi\":\"10.1109/CSE.2014.247\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The goal of Frequent Item set Mining (FIM) is to find the biggest number of frequently used subsets from a big transaction database. In previous studies, using the advantage of multicore computing, the execution time of an Apriori algorithm was sharply decreased: when the size of a data set was more than TBs and a single host had been unable to afford a large number of operations by using a number of computers connected into a super computer to speed up execution as being the obvious solution. Some parallel Apriori algorithms, based on the MapReduce framework, have been proposed. However, with these algorithms, memory would be quickly exhausted and communication cost would rise sharply. This would greatly reduce execution efficiency. In this paper, we present an improved reformative Apriori algorithm that uses the length of each transaction to determine the size of the maximum merge candidate item sets. 
By reducing the production of low frequency item sets in Map function, memory exhaustion is ameliorated, greatly improving execution efficiency.\",\"PeriodicalId\":258990,\"journal\":{\"name\":\"2014 IEEE 17th International Conference on Computational Science and Engineering\",\"volume\":\"50 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE 17th International Conference on Computational Science and Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSE.2014.247\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE 17th International Conference on Computational Science and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSE.2014.247","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
IOMRA - A High Efficiency Frequent Itemset Mining Algorithm Based on the MapReduce Computation Model
The goal of Frequent Itemset Mining (FIM) is to find as many frequently occurring subsets as possible in a large transaction database. In previous studies, the execution time of the Apriori algorithm was sharply reduced by exploiting multicore computing; once a data set grows beyond terabytes and a single host can no longer handle the required volume of operations, the obvious solution is to connect many machines into a cluster to speed up execution. Several parallel Apriori algorithms based on the MapReduce framework have been proposed. However, with these algorithms memory is quickly exhausted and communication cost rises sharply, which greatly reduces execution efficiency. In this paper, we present an improved Apriori algorithm that uses the length of each transaction to bound the size of the largest candidate itemsets produced by merging. By reducing the generation of low-frequency itemsets in the Map function, memory exhaustion is mitigated and execution efficiency is greatly improved.
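The following is a minimal sketch, not the authors' actual implementation, of the kind of pruning the abstract describes: in pass k of a MapReduce Apriori job, the Map function uses each transaction's length to avoid emitting counts for candidate itemsets the transaction cannot possibly contain. The function names (map_pass_k, reduce_counts) and parameters are hypothetical illustrations.

```python
# Hypothetical sketch of length-based pruning in a MapReduce Apriori pass.
from itertools import combinations

def map_pass_k(transaction, candidates_k, k):
    """Map function for pass k: emit (candidate, 1) pairs.

    transaction  -- list of item ids in one transaction
    candidates_k -- set of frozensets, each a candidate k-itemset
    k            -- current candidate itemset size
    """
    # A transaction shorter than k cannot contain any k-itemset,
    # so it emits nothing and produces no low-frequency candidates.
    if len(transaction) < k:
        return
    # Only count candidates that actually occur in this transaction.
    for subset in combinations(sorted(set(transaction)), k):
        cand = frozenset(subset)
        if cand in candidates_k:
            yield (cand, 1)

def reduce_counts(candidate, counts, min_support):
    """Reduce function: keep a candidate only if it meets min_support."""
    total = sum(counts)
    if total >= min_support:
        yield (candidate, total)
```

Under this scheme, short transactions drop out of later passes entirely, which is one way the volume of intermediate low-frequency itemsets emitted by the mappers (and hence memory and communication cost) can be reduced.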