A Study of Bidirectional Encoder Representations from Transformers for Sequential Recommendations

Amine Kheldouni, J. Boumhidi
2022 International Conference on Intelligent Systems and Computer Vision (ISCV), published 2022-05-18
DOI: 10.1109/ISCV54655.2022.9806062
Sequential recommender systems seek to capture user affinities and behaviors from the sequential series of a user's interactions. While earlier models based on Markov Chains and Recurrent Neural Networks predicted future interactions mainly from the last interaction, sequential recommendation aspires to express more than that. In this paper, we detail BERT4Rec, a sequential recommendation approach based on a bidirectional encoder built from self-attention Transformer blocks. Inspired by recent advances in NLP, the model learns to predict masked items in a user's sequence using the Cloze objective: it recovers randomly masked items by attending to context on both the left and the right, capturing both short-term and long-term dependencies. In what follows, this sequential model is applied to new datasets of Amazon reviews with different characteristics. Our main contribution thus consists of testing this approach and its limits on several recommendation datasets.
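The Cloze-style masking described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the `MASK_ID` value, the `IGNORE` label convention, the masking probability, and the helper name `cloze_mask` are all assumptions chosen for clarity.

```python
import random

MASK_ID = 0     # hypothetical reserved id for the [MASK] token
IGNORE = -100   # label at unmasked positions, excluded from the loss

def cloze_mask(sequence, mask_prob=0.15, rng=None):
    """Randomly mask items in a user's interaction sequence (Cloze task).

    Returns the masked input and per-position labels: the original item id
    at masked positions, IGNORE everywhere else.
    """
    rng = rng or random.Random()
    masked, labels = [], []
    for item in sequence:
        if rng.random() < mask_prob:
            masked.append(MASK_ID)   # hide the item from the encoder
            labels.append(item)      # the model must recover this item
        else:
            masked.append(item)
            labels.append(IGNORE)
    return masked, labels

# Toy interaction history: item ids in chronological order
seq = [12, 7, 33, 18, 5]
inp, labels = cloze_mask(seq, mask_prob=0.4, rng=random.Random(0))
```

A bidirectional encoder trained on `(inp, labels)` pairs sees context on both sides of each masked position, which is what distinguishes this objective from left-to-right next-item prediction.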