{"title":"MAE4Rec:存储节省变压器顺序建议","authors":"Kesen Zhao, Xiangyu Zhao, Zijian Zhang, Muyang Li","doi":"10.1145/3511808.3557461","DOIUrl":null,"url":null,"abstract":"Sequential recommender systems (SRS) aim to infer the users' preferences from their interaction history and predict items that will be of interest to the users. The majority of SRS models typically incorporate all historical interactions for next-item recommendations. Despite their success, feeding all interactions into the model without filtering may lead to severe practical issues: (i) redundant interactions hinder the SRS model from capturing the users' intentions; (ii) the computational cost is huge, as the computational complexity is proportional to the length of the interaction sequence; (iii) more memory space is necessitated to store all interaction records from all users. To this end, we propose a novel storage-saving SRS framework, MAE4Rec, based on a unidirectional self-attentive mechanism and masked autoencoder. Specifically, in order to lower the storage consumption, MAE4Rec first masks and discards a large percentage of historical interactions, and then infers the next interacted item solely based on the latent representation of unmarked ones. Experiments on two real-world datasets demonstrate that the proposed model achieves competitive performance against state-of-the-art SRS models with more than 40% compression of storage.","PeriodicalId":389624,"journal":{"name":"Proceedings of the 31st ACM International Conference on Information & Knowledge Management","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"MAE4Rec: Storage-saving Transformer for Sequential Recommendations\",\"authors\":\"Kesen Zhao, Xiangyu Zhao, Zijian Zhang, Muyang Li\",\"doi\":\"10.1145/3511808.3557461\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sequential recommender systems (SRS) aim to infer the users' preferences from their interaction history and predict items that will be of interest to the users. The majority of SRS models typically incorporate all historical interactions for next-item recommendations. Despite their success, feeding all interactions into the model without filtering may lead to severe practical issues: (i) redundant interactions hinder the SRS model from capturing the users' intentions; (ii) the computational cost is huge, as the computational complexity is proportional to the length of the interaction sequence; (iii) more memory space is necessitated to store all interaction records from all users. To this end, we propose a novel storage-saving SRS framework, MAE4Rec, based on a unidirectional self-attentive mechanism and masked autoencoder. Specifically, in order to lower the storage consumption, MAE4Rec first masks and discards a large percentage of historical interactions, and then infers the next interacted item solely based on the latent representation of unmarked ones. 
Experiments on two real-world datasets demonstrate that the proposed model achieves competitive performance against state-of-the-art SRS models with more than 40% compression of storage.\",\"PeriodicalId\":389624,\"journal\":{\"name\":\"Proceedings of the 31st ACM International Conference on Information & Knowledge Management\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 31st ACM International Conference on Information & Knowledge Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3511808.3557461\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 31st ACM International Conference on Information & Knowledge Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3511808.3557461","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MAE4Rec: Storage-saving Transformer for Sequential Recommendations
Sequential recommender systems (SRS) aim to infer users' preferences from their interaction histories and predict items that will interest them. Most SRS models incorporate all historical interactions when recommending the next item. Despite their success, feeding every interaction into the model without filtering raises serious practical issues: (i) redundant interactions hinder the model from capturing users' intentions; (ii) the computational cost is high, since complexity grows with the length of the interaction sequence; and (iii) extra memory is required to store every interaction record from every user. To this end, we propose MAE4Rec, a novel storage-saving SRS framework based on a unidirectional self-attentive mechanism and a masked autoencoder. Specifically, to lower storage consumption, MAE4Rec first masks and discards a large percentage of historical interactions, and then infers the next interacted item solely from the latent representations of the unmasked ones. Experiments on two real-world datasets demonstrate that the proposed model achieves competitive performance against state-of-the-art SRS models while compressing storage by more than 40%.
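To make the mechanism in the abstract concrete, below is a minimal sketch (not the authors' code) of the core idea: randomly keep only a fraction of a user's interaction sequence, encode just the kept items with a unidirectional (causally masked) Transformer, and score the next item from the final latent state. All names and hyperparameters (MaskedSeqRecommender, mask_ratio, d_model, etc.) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of masking-and-discarding for sequential recommendation.
# Not the MAE4Rec reference implementation.
import torch
import torch.nn as nn


class MaskedSeqRecommender(nn.Module):
    def __init__(self, num_items: int, d_model: int = 64, n_heads: int = 2,
                 n_layers: int = 2, max_len: int = 50, mask_ratio: float = 0.5):
        super().__init__()
        self.mask_ratio = mask_ratio  # fraction of interactions discarded (assumed value)
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, seq_len) of item ids; 0 is padding.
        batch, seq_len = seq.shape
        if self.training and self.mask_ratio > 0:
            keep = max(1, int(seq_len * (1 - self.mask_ratio)))
            # Randomly keep a subset of positions (sorted to preserve order)
            # and discard the rest -- the storage-saving step from the abstract.
            idx = torch.rand(batch, seq_len, device=seq.device).argsort(dim=1)
            idx, _ = idx[:, :keep].sort(dim=1)
            seq = torch.gather(seq, 1, idx)
            pos = idx  # original positions of the kept interactions
            seq_len = keep
        else:
            pos = torch.arange(seq_len, device=seq.device).expand(batch, -1)
        x = self.item_emb(seq) + self.pos_emb(pos)
        # Causal (unidirectional) attention mask: each position attends only
        # to earlier kept interactions.
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                       device=seq.device), diagonal=1)
        h = self.encoder(x, mask=causal)
        # Score all items from the last latent state (tied item embeddings).
        return h[:, -1] @ self.item_emb.weight.T  # (batch, num_items + 1)


if __name__ == "__main__":
    model = MaskedSeqRecommender(num_items=1000)
    logits = model(torch.randint(1, 1001, (4, 50)))  # 4 users, 50 interactions each
    print(logits.shape)  # torch.Size([4, 1001])
```

Since attention cost is quadratic in sequence length, encoding only the kept interactions reduces both compute and the storage needed for interaction records, which is consistent with the trade-off the abstract reports (competitive accuracy at over 40% storage compression).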