{"title":"PLATE: A Prompt-Enhanced Paradigm for Multi-Scenario Recommendations","authors":"Yuhao Wang, Xiangyu Zhao, Bo Chen, Qidong Liu, Huifeng Guo, Huanshuo Liu, Yichao Wang, Rui Zhang, Ruiming Tang","doi":"10.1145/3539618.3591750","DOIUrl":null,"url":null,"abstract":"With the explosive growth of commercial applications of recommender systems, multi-scenario recommendation (MSR) has attracted considerable attention, which utilizes data from multiple domains to improve their recommendation performance simultaneously. However, training a unified deep recommender system (DRS) may not explicitly comprehend the commonality and difference among domains, whereas training an individual model for each domain neglects the global information and incurs high computation costs. Likewise, fine-tuning on each domain is inefficient, and recent advances that apply the prompt tuning technique to improve fine-tuning efficiency rely solely on large-sized transformers. In this work, we propose a novel prompt-enhanced paradigm for multi-scenario recommendation. Specifically, a unified DRS backbone model is first pre-trained using data from all the domains in order to capture the commonality across domains. Then, we conduct prompt tuning with two novel prompt modules, capturing the distinctions among various domains and users. Our experiments on Douban, Amazon, and Ali-CCP datasets demonstrate the effectiveness of the proposed paradigm with two noticeable strengths: (i) its great compatibility with various DRS backbone models, and (ii) its high computation and storage efficiency with only 6% trainable parameters in prompt tuning phase. The implementation code is available for easy reproduction.","PeriodicalId":425056,"journal":{"name":"Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":"48 41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3539618.3591750","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
With the explosive growth of commercial applications of recommender systems, multi-scenario recommendation (MSR), which leverages data from multiple domains to improve recommendation performance in all of them simultaneously, has attracted considerable attention. However, training a single unified deep recommender system (DRS) may fail to explicitly capture the commonalities and differences among domains, whereas training an individual model for each domain neglects global information and incurs high computation costs. Likewise, fine-tuning a shared model on each domain is inefficient, and recent advances that apply prompt tuning to improve fine-tuning efficiency rely solely on large transformer models. In this work, we propose a novel prompt-enhanced paradigm for multi-scenario recommendation. Specifically, a unified DRS backbone model is first pre-trained on data from all domains to capture the commonality across them. We then conduct prompt tuning with two novel prompt modules that capture the distinctions among domains and among users. Our experiments on the Douban, Amazon, and Ali-CCP datasets demonstrate the effectiveness of the proposed paradigm, with two noticeable strengths: (i) strong compatibility with various DRS backbone models, and (ii) high computation and storage efficiency, with only 6% of parameters trainable in the prompt tuning phase. The implementation code is available for easy reproduction.
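The abstract describes a two-phase paradigm: pre-train a shared DRS backbone on pooled multi-domain data, then freeze it and tune lightweight domain- and user-level prompt modules. The abstract does not specify how the prompts are injected into the backbone, so the sketch below is only a minimal illustration, assuming a small MLP backbone and prompt embedding tables concatenated with the input features; the class name `PromptedDRS`, the dimensions, and the optimizer setup are hypothetical and not PLATE's actual design.

```python
import torch
import torch.nn as nn

class PromptedDRS(nn.Module):
    """Illustrative DRS: an MLP scorer plus domain- and user-level prompt
    embeddings concatenated with the input features (hypothetical design)."""

    def __init__(self, feat_dim=32, n_domains=3, n_users=1000, prompt_dim=32, hidden=64):
        super().__init__()
        # Shared backbone, pre-trained on data from all domains (phase 1).
        self.backbone = nn.Sequential(
            nn.Linear(feat_dim + 2 * prompt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Lightweight prompt modules, updated in the prompt tuning phase (phase 2).
        self.domain_prompt = nn.Embedding(n_domains, prompt_dim)
        self.user_prompt = nn.Embedding(n_users, prompt_dim)

    def forward(self, feats, domain_id, user_id):
        # Concatenate the two prompt vectors with the raw interaction features.
        p = torch.cat([self.domain_prompt(domain_id), self.user_prompt(user_id)], dim=-1)
        return self.backbone(torch.cat([feats, p], dim=-1)).squeeze(-1)


model = PromptedDRS()

# Phase 1 (pre-training): every parameter is trainable on pooled multi-domain data.
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Phase 2 (prompt tuning): freeze the backbone, update only the prompt embeddings.
for p in model.backbone.parameters():
    p.requires_grad = False
prompt_opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)

# Toy prompt-tuning step with random tensors, just to show the data flow.
feats = torch.randn(8, 32)
domain_id = torch.randint(0, 3, (8,))
user_id = torch.randint(0, 1000, (8,))
labels = torch.randint(0, 2, (8,)).float()
loss = nn.functional.binary_cross_entropy_with_logits(
    model(feats, domain_id, user_id), labels
)
loss.backward()
prompt_opt.step()
```

In phase 2 only the two embedding tables receive gradients, which is what keeps the trainable-parameter share small; the 6% figure reported in the abstract refers to PLATE's actual modules, and the exact ratio in a sketch like this depends on the backbone size.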