{"title":"Sensitivity loss training based implicit feedback","authors":"Kunyu Li, Nan Wang, Xinyu Liu","doi":"10.1109/ICPADS53394.2021.00036","DOIUrl":null,"url":null,"abstract":"In recommender systems, due to the lack of explicit feedback features, datasets with implicit feedback are always accustomed to train all samples without separating them during model training, without considering the non-consistency of samples. This leads to a significant decrease in sample utilization and creates challenges for model training. Also, little work has been done to explore the intrinsic laws implied in the implicit feedback dataset and how to effectively train the implicit feedback data. In this paper, we first summarize the variation pattern of loss with model training for different rating samples in the explicit feedback dataset, and find that model training is highly sensitive to the ratings. Second, we design an adaptive hierarchical training function with dynamic thresholds that can effectively distinguish different rating samples in the dataset, thus optimizing the implicit feedback dataset into an explicit feedback dataset to some extent. Finally, to better learn samples with different ratings, we also propose an adaptive hierarchical training strategy to obtain better training results in the implicit feedback dataset. 
Extensive experiments on three datasets show that our approach achieves excellent performance and greatly improves the performance of the model.","PeriodicalId":309508,"journal":{"name":"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPADS53394.2021.00036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In recommender systems, because implicit feedback lacks explicit rating features, models are typically trained on all samples together, without separating them or accounting for their inconsistency. This substantially reduces sample utilization and makes model training difficult. Moreover, little work has explored the intrinsic patterns of implicit feedback datasets or how to train effectively on implicit feedback data. In this paper, we first summarize how the loss varies during model training for samples with different ratings in an explicit feedback dataset, and find that model training is highly sensitive to the ratings. Second, we design an adaptive hierarchical training function with dynamic thresholds that can effectively distinguish samples with different ratings, thereby converting the implicit feedback dataset into an explicit feedback dataset to some extent. Finally, to better learn samples with different ratings, we also propose an adaptive hierarchical training strategy that yields better training results on implicit feedback data. Extensive experiments on three datasets show that our approach achieves excellent performance and greatly improves the model.
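The abstract does not specify the adaptive hierarchical training function itself. A minimal sketch of the general idea it describes — stratifying samples by their current per-sample loss against thresholds recomputed each step, then weighting each tier differently — might look like the following. The quantile cut points, tier weights, and the weighting direction (up-weighting low-loss samples, down-weighting high-loss ones) are illustrative assumptions, not the authors' actual values.

```python
import numpy as np

def hierarchical_weighted_loss(per_sample_loss, quantiles=(0.33, 0.66),
                               tier_weights=(1.5, 1.0, 0.5)):
    """Weight each sample's loss by the tier its current loss falls into.

    Thresholds are recomputed from the batch's own loss distribution on
    every call, so they adapt ("dynamic thresholds") as training
    progresses.  NOTE: the quantiles and tier weights here are
    illustrative assumptions; the paper's function is not given in
    the abstract.
    """
    losses = np.asarray(per_sample_loss, dtype=float)
    # Dynamic thresholds: quantiles of the current per-sample losses.
    thresholds = np.quantile(losses, quantiles)
    # Tier 0 = lowest-loss samples, ..., tier len(quantiles) = highest-loss.
    tiers = np.searchsorted(thresholds, losses, side="right")
    weights = np.asarray(tier_weights)[tiers]
    return float(np.mean(weights * losses)), tiers

# Example: samples the model already fits well (low loss, likely true
# positives) are up-weighted; high-loss samples (possibly noisy implicit
# feedback) are down-weighted.
loss_val, tiers = hierarchical_weighted_loss([0.1, 0.2, 0.5, 0.9, 1.4, 2.0])
```

Because the thresholds come from the batch itself rather than fixed constants, the stratification shifts automatically as overall losses shrink during training, which is one plausible reading of the "dynamic thresholds" the abstract refers to.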