Product click-through rate prediction model integrating self-attention mechanism
Tong Zhu, Shuqin Li, Chunquan Liang, B. Liu, Xiaopeng Li
2021 3rd International Conference on Advances in Computer Technology, Information Science and Communication (CTISC), April 2021. DOI: 10.1109/CTISC52352.2021.00056
In the product click-through rate (CTR) prediction task, existing deep learning models construct combinatorial features implicitly and cannot determine the optimal order of the combinations being learned. At the same time, they ignore the intrinsic correlations between features, so invalid feature combinations introduce unnecessary noise into the model. To address these problems, a product click-through rate prediction model incorporating a self-attention mechanism (ACDeepFM) is proposed. The model first uses self-attention to mine the intrinsic connections among the input features and adaptively model their weights. A compressed interaction network is then added to explicitly capture the effect of feature combinations of different orders on the prediction results, and a deep neural network is added to fit the complex interactions between users and items. Finally, the information extracted by the self-attention module, the deep neural network module, and the compressed interaction network module is fed into a subsequent multilayer perceptron to further learn meaningful combinatorial features. Experimental results on two publicly available datasets show that the proposed model achieves higher AUC and lower Logloss than the FM, DNN, DeepFM, and xDeepFM models, validating the effectiveness of ACDeepFM.
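The abstract's first step, using self-attention to adaptively weight the input features, can be sketched as scaled dot-product attention over the embedded feature fields. This is a minimal illustration under assumed shapes and projection matrices, not the authors' actual implementation; the function name and all parameters are hypothetical.

```python
import numpy as np

def self_attention_reweight(E, Wq, Wk, Wv):
    """Adaptively reweight feature-field embeddings via self-attention.

    E:          (num_fields, d) embeddings, one row per input field.
    Wq, Wk, Wv: (d, d) query/key/value projections (hypothetical parameters).
    Returns:    (num_fields, d) field representations, each an attention-weighted
                mixture of all fields, capturing cross-field correlations.
    """
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    scores = Q @ K.T / np.sqrt(E.shape[1])           # pairwise field affinities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ V

# Toy usage with 4 feature fields embedded in 8 dimensions.
rng = np.random.default_rng(0)
num_fields, d = 4, 8
E = rng.standard_normal((num_fields, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention_reweight(E, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Fields that correlate strongly receive larger attention weights, which is how such a module can damp the noise from invalid feature combinations before they reach the interaction layers.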
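The compressed interaction network (CIN) mentioned in the abstract is the explicit feature-interaction component popularized by xDeepFM: each layer forms element-wise (Hadamard) interactions between the previous layer's feature maps and the original field embeddings, then compresses them with learned weights. A one-layer sketch, with all shapes and names assumed for illustration:

```python
import numpy as np

def cin_layer(Xk, X0, W):
    """One compressed interaction network layer (xDeepFM-style).

    Xk: (Hk, d) feature maps from the previous CIN layer (X0 for the first layer).
    X0: (m, d)  original field embeddings.
    W:  (Hn, Hk, m) learned compression weights, one (Hk, m) filter per output map.
    Returns: (Hn, d) feature maps; each raises the interaction order by one.
    """
    # Hadamard interactions between every (previous map, input field) pair: (Hk, m, d)
    Z = Xk[:, None, :] * X0[None, :, :]
    # Compress each (Hk, m) slice into Hn output maps, per embedding dimension.
    return np.einsum('nim,imd->nd', W, Z)

# Toy usage: 5 fields, embedding size 4, 3 previous maps, 6 output maps.
rng = np.random.default_rng(1)
m, d, Hk, Hn = 5, 4, 3, 6
X0 = rng.standard_normal((m, d))
Xk = rng.standard_normal((Hk, d))
W = rng.standard_normal((Hn, Hk, m))
out = cin_layer(Xk, X0, W)
print(out.shape)  # (6, 4)
```

Stacking such layers yields interactions of explicitly controlled order, which is what lets the model "precisely mine" the contribution of each order, in contrast to a plain DNN, where the interaction order is implicit.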