Sparsity-Aware Tensor Decomposition
Süreyya Emre Kurt, S. Raje, Aravind Sukumaran-Rajam, P. Sadayappan
2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS), May 2022
DOI: 10.1109/ipdps53621.2022.00097
Abstract: Sparse tensor decomposition, such as Canonical Polyadic Decomposition (CPD), is a key operation in data analytics and machine learning. Its computation is dominated by a set of MTTKRP (Matricized Tensor Times Khatri-Rao Product) operations. The MTTKRP operations required for sparse CPD share common sub-computations, and many approaches exist to factorize and reuse these common sub-expressions. Prior work on sparse CPD has focused on minimizing the number of high-level operators. In this paper, we consider a design space that covers whether partial MTTKRP results should be saved and which mode permutations to use, and we model the total volume of data movement to and from memory. We also propose a fine-grained load-balancing method that supports higher levels of parallelization.
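To make the MTTKRP kernel concrete, the sketch below computes the mode-n MTTKRP of a sparse tensor stored in COO format: for each nonzero, the rows of the other modes' factor matrices are multiplied elementwise, scaled by the nonzero value, and accumulated into one row of the output. This is a minimal illustrative implementation, not the optimized operator-factorization or load-balancing scheme the paper proposes; the function name mttkrp_coo and its signature are assumptions for this example.

```python
import numpy as np

def mttkrp_coo(indices, values, factors, mode):
    """MTTKRP along `mode` for a sparse tensor in COO format.

    indices: (nnz, nmodes) integer array of nonzero coordinates
    values:  (nnz,) array of nonzero values
    factors: list of dense factor matrices; factors[m] has shape (dim_m, R)
    Returns an array of shape (dim_mode, R).
    """
    nnz, nmodes = indices.shape
    rank = factors[0].shape[1]
    out = np.zeros((factors[mode].shape[0], rank))
    for n in range(nnz):
        # Hadamard product of the factor rows for every mode except
        # `mode`, scaled by the nonzero value of this tensor entry.
        row = values[n] * np.ones(rank)
        for m in range(nmodes):
            if m != mode:
                row *= factors[m][indices[n, m]]
        # Accumulate into the output row selected by this nonzero's
        # coordinate along the target mode.
        out[indices[n, mode]] += row
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dims, rank, nnz = (4, 5, 6), 3, 10
    idx = np.column_stack([rng.integers(0, d, nnz) for d in dims])
    vals = rng.random(nnz)
    facs = [rng.random((d, rank)) for d in dims]
    print(mttkrp_coo(idx, vals, facs, mode=0).shape)  # (4, 3)
```

For a third-order tensor X with factor matrices A, B, C, the mode-0 MTTKRP equals X_(1)(C ⊙ B), where ⊙ is the Khatri-Rao product and X_(1) is the mode-1 matricization; the sparse loop above computes the same result without ever materializing the Khatri-Rao product, which is what makes the sparse formulation practical.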