Shaden Smith, Alec Beri, G. Karypis
2017 46th International Conference on Parallel Processing (ICPP), September 2017
DOI: 10.1109/ICPP.2017.20
Constrained Tensor Factorization with Accelerated AO-ADMM
Low-rank sparse tensor factorization is a popular tool for analyzing multi-way data and is used in domains such as recommender systems, precision healthcare, and cybersecurity. Imposing constraints on a factorization, such as non-negativity or sparsity, is a natural way of encoding prior knowledge of the multi-way data. While constrained factorizations are useful for practitioners, they can greatly increase factorization time due to slower convergence and computational overheads. Recently, a hybrid of alternating optimization and the alternating direction method of multipliers (AO-ADMM) was shown to have both a high convergence rate and the ability to naturally incorporate a variety of popular constraints. In this work, we present a parallelization strategy and two approaches for accelerating AO-ADMM. By redefining the convergence criteria of the inner ADMM iterations, we are able to split the data in a way that not only accelerates the per-iteration convergence, but also speeds up the execution of the ADMM iterations due to efficient use of cache resources. Secondly, we develop a method of exploiting dynamic sparsity in the factors to speed up tensor-matrix kernels. These combined advancements achieve up to an 8× speedup over the state of the art on a variety of real-world sparse tensors.
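To make the AO-ADMM idea concrete: in the alternating-optimization outer loop, each factor matrix is updated by solving a constrained least-squares subproblem with a few inner ADMM iterations. The sketch below shows such an inner solve for a non-negativity constraint. It is an illustrative minimal version only, not the paper's parallel or cache-optimized implementation; the function name `admm_nnls` and all parameter defaults are assumptions for this example.

```python
import numpy as np

def admm_nnls(A, B, rho=1.0, iters=200):
    """Illustrative ADMM inner solve for min ||A H - B||_F^2 s.t. H >= 0,
    the kind of per-factor subproblem that arises inside AO-ADMM."""
    r = A.shape[1]
    G = A.T @ A + rho * np.eye(r)      # regularized Gram matrix
    L = np.linalg.cholesky(G)          # factor once, reuse every inner iteration
    AtB = A.T @ B
    H_bar = np.zeros((r, B.shape[1]))  # auxiliary variable carrying the constraint
    U = np.zeros_like(H_bar)           # scaled dual variable
    for _ in range(iters):
        rhs = AtB + rho * (H_bar - U)
        H = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # unconstrained primal update
        H_bar = np.maximum(0.0, H + U)  # proximal step: projection onto H >= 0
        U += H - H_bar                  # dual update
    return H_bar
```

Swapping the projection `np.maximum(0.0, ...)` for a different proximal operator (e.g., soft-thresholding for sparsity) is what lets this scheme "naturally incorporate a variety of popular constraints" without changing the rest of the solver.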