Parallel generalized tensor multiplication
Can Kavaklioglu, A. Cemgil
2012 20th Signal Processing and Communications Applications Conference (SIU), 2012-04-18
DOI: 10.1109/SIU.2012.6204612
Tensor factorization is a frequently used modelling tool for problems involving large amounts of n-way data. The Probabilistic Latent Tensor Factorization (PLTF) framework provides a probabilistic approach to the tensor factorization problem. Its iterative algorithms rely on generalized tensor multiplication, which involves a large number of arithmetic operations with similar structure. This work demonstrates the performance improvements achieved by executing these independent operations on a graphics processing unit (GPU).
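The generalized tensor multiplication the abstract refers to can be sketched as follows: multiply several factor tensors elementwise over a shared set of named indices, then sum out every index that does not appear in the output. A minimal pure-Python illustration (not the authors' implementation; the function name and index convention are chosen here for clarity):

```python
from itertools import product

def generalized_tensor_multiply(out_indices, factors, dims):
    """Sketch of generalized tensor multiplication.

    out_indices: string of index names kept in the result, e.g. "ik".
    factors: list of (index_string, data) pairs, where data maps
             index tuples to values, e.g. ("ij", {(0, 0): 1.0, ...}).
    dims: dict mapping each index name to its size.
    Returns a dict mapping output index tuples to summed products.
    """
    all_idx = sorted(dims)
    result = {}
    # Enumerate every joint index assignment; each output element's
    # accumulation is independent of the others, which is exactly the
    # structure that maps onto parallel GPU threads.
    for assignment in product(*(range(dims[i]) for i in all_idx)):
        env = dict(zip(all_idx, assignment))
        val = 1.0
        for idx, data in factors:
            val *= data[tuple(env[i] for i in idx)]
        key = tuple(env[i] for i in out_indices)
        result[key] = result.get(key, 0.0) + val
    return result

# Ordinary matrix product as a special case: C[i,k] = sum_j A[i,j] * B[j,k]
A = {(i, j): float(i + j) for i in range(2) for j in range(3)}
B = {(j, k): float(j * k) for j in range(3) for k in range(2)}
C = generalized_tensor_multiply("ik",
                                [("ij", A), ("jk", B)],
                                {"i": 2, "j": 3, "k": 2})
# C[(0, 1)] = A[0,1]*B[1,1] + A[0,2]*B[2,1] = 1*1 + 2*2 = 5.0
```

Because every output element is an independent sum of products over the contracted indices, the loop body above can be assigned one thread per output element, which is the parallelization opportunity the paper exploits.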