{"title":"一个可扩展的最大团算法使用Apache Spark","authors":"Amr Elmasry, Ayman Khalafallah, Moustafa Meshry","doi":"10.1109/AICCSA.2016.7945631","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a scalable algorithm for finding the exact solution to the maximum-clique problem. At the heart of our approach lies a multi-phase partitioning strategy, which enables iterative, in-memory processing of graphs. The multi-phase partitioning is tuned for the resources of the machine/cluster to get the best performance. To promote parallelization and scalability on both a cluster-level (distributing the problem on a number of machines) and on a machine-level (using all available cores on each machine), we use Apache Spark. We explore the untraditional usage of distributed frameworks, such as Apache Spark, to distribute computational load, as opposed to distributing big data. We focus on dense graphs, typically with thousands of vertices and a few millions edges; this is in contrast to sparse real-world graphs that don't initially fit into the memory of a single driver machine. Our experiments show that, for large dense graphs, we get up to 100% performance speedup compared to the state-of-the-art parallel approaches. Moreover, our algorithm is highly scalable and fault-tolerant.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A scalable maximum-clique algorithm using Apache Spark\",\"authors\":\"Amr Elmasry, Ayman Khalafallah, Moustafa Meshry\",\"doi\":\"10.1109/AICCSA.2016.7945631\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we propose a scalable algorithm for finding the exact solution to the maximum-clique problem. At the heart of our approach lies a multi-phase partitioning strategy, which enables iterative, in-memory processing of graphs. The multi-phase partitioning is tuned for the resources of the machine/cluster to get the best performance. To promote parallelization and scalability on both a cluster-level (distributing the problem on a number of machines) and on a machine-level (using all available cores on each machine), we use Apache Spark. We explore the untraditional usage of distributed frameworks, such as Apache Spark, to distribute computational load, as opposed to distributing big data. We focus on dense graphs, typically with thousands of vertices and a few millions edges; this is in contrast to sparse real-world graphs that don't initially fit into the memory of a single driver machine. Our experiments show that, for large dense graphs, we get up to 100% performance speedup compared to the state-of-the-art parallel approaches. 
Moreover, our algorithm is highly scalable and fault-tolerant.\",\"PeriodicalId\":448329,\"journal\":{\"name\":\"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AICCSA.2016.7945631\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICCSA.2016.7945631","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A scalable maximum-clique algorithm using Apache Spark
In this paper, we propose a scalable algorithm for finding the exact solution to the maximum-clique problem. At the heart of our approach lies a multi-phase partitioning strategy that enables iterative, in-memory processing of graphs. The multi-phase partitioning is tuned to the resources of the machine or cluster to obtain the best performance. To promote parallelization and scalability at both the cluster level (distributing the problem across a number of machines) and the machine level (using all available cores on each machine), we use Apache Spark. We explore an unconventional use of distributed frameworks such as Apache Spark: distributing computational load rather than big data. We focus on dense graphs, typically with thousands of vertices and a few million edges; this is in contrast to sparse real-world graphs that do not initially fit into the memory of a single driver machine. Our experiments show that, for large dense graphs, we obtain up to a 100% speedup over state-of-the-art parallel approaches. Moreover, our algorithm is highly scalable and fault-tolerant.
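
The abstract gives no code, so the following is a minimal PySpark sketch of the general idea it describes (distributing computation rather than big data), not the authors' implementation or their multi-phase partitioning. In the sketch, the dense graph is broadcast whole to every executor, the search space is split into one independent subproblem per vertex, and each Spark task runs a local exact clique search on its subproblem. The helper names and the simple branch-and-bound routine are illustrative assumptions.

# Sketch: distribute the maximum-clique *computation* with Spark while the
# (small, dense) graph itself is broadcast to every executor.
from pyspark import SparkContext

def local_max_clique(adj, candidates, current, best_size):
    # Plain branch-and-bound: extend the clique `current` with vertices from
    # `candidates` (all of which are adjacent to every vertex in `current`).
    best = list(current)
    for i, v in enumerate(candidates):
        # Prune: even taking all remaining candidates cannot beat the best so far.
        if len(current) + len(candidates) - i <= max(best_size, len(best)):
            break
        new_candidates = [u for u in candidates[i + 1:] if u in adj[v]]
        found = local_max_clique(adj, new_candidates, current + [v],
                                 max(best_size, len(best)))
        if len(found) > len(best):
            best = found
    return best

if __name__ == "__main__":
    sc = SparkContext(appName="max-clique-sketch")

    # Toy dense graph as an adjacency map {vertex: set(neighbours)}.
    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    adj_bc = sc.broadcast(adj)  # ship the whole graph to every executor

    def solve(v):
        # Subproblem for vertex v: largest clique containing v, restricted to
        # v's higher-numbered neighbours (avoids recomputing the same clique).
        a = adj_bc.value
        cands = sorted(u for u in a[v] if u > v)
        return local_max_clique(a, cands, [v], 0)

    # One task per vertex; the driver keeps the largest clique found.
    best = (sc.parallelize(sorted(adj), numSlices=len(adj))
              .map(solve)
              .max(key=len))
    print("maximum clique:", best)
    sc.stop()

Because every subproblem carries only a vertex id, the shuffled data stays tiny; the heavy part is the per-task search, which is exactly the load one would want Spark to spread across cores and machines. A tuned implementation in the spirit of the paper would additionally balance subproblem sizes across phases, which this sketch does not attempt.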