{"title":"Locally enhanced denoising self-attention networks and decoupled position encoding for sequential recommendation","authors":"Xingyao Yang, Xinsheng Dong, Jiong Yu, Shuangquan Li, Xinyu Xiong, Hongtao Shen","doi":"10.1016/j.compeleceng.2025.110064","DOIUrl":null,"url":null,"abstract":"<div><div>Most of the existing Transformer-based models have been shown to have great advantages in sequential recommendation by modeling temporal dynamics through the self-attention mechanism. Nevertheless, the original self-attention mechanism requires the equal weighting and computation of all interactions between each item and every other item. This method presents limitations in effectively capturing shifts in users’ local interests. In addition, this approach ignores the noise in user data, while absolute position encoding leads to inaccurate sequential relations between items. An innovative Locally Enhanced Denoising Self-Attention Network and Decoupled Position Encoding for Sequential Recommendation, named LEDADP, is presented to resolve these issues. Specifically, we use the noise filtering module to convert the original data into the frequency domain to reduce the noise and achieve the purpose of filtering the noise. We integrate convolution into self-attention for local interest transfer, and we provide a multi-scale local enhanced convolution module that models local dependencies taking into account various local preferences at several scales, collecting more detailed local semantic information. Furthermore, in order to more precisely depict the sequential link between items, we additionally employ decoupled position encoding. Extensive experiments conducted on three real-world datasets : Beauty, Toys, and ML-1M. 
The experimental results show that compared with the suboptimal model, the proposed model has respectively improved by 2.87%, 7.83% and 4.1% on Recall@5, and by 2.03%, 4.08% and 6.8% on NDCG@5, which proves the validity of the model.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"123 ","pages":"Article 110064"},"PeriodicalIF":4.0000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Electrical Engineering","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0045790625000072","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Most existing Transformer-based models have shown great advantages in sequential recommendation by modeling temporal dynamics through the self-attention mechanism. However, the original self-attention mechanism weights and computes all pairwise interactions between items equally, which limits its ability to capture shifts in users' local interests. It also ignores the noise in user data, and absolute position encoding yields inaccurate sequential relations between items. To resolve these issues, we present LEDADP, a Locally Enhanced Denoising self-Attention network with Decoupled Position encoding for sequential recommendation. Specifically, a noise filtering module transforms the original data into the frequency domain to filter out noise. We integrate convolution into self-attention to model shifts in local interest, and propose a multi-scale locally enhanced convolution module that captures local dependencies at several scales, accounting for diverse local preferences and collecting finer-grained local semantic information. Furthermore, to depict the sequential links between items more precisely, we employ decoupled position encoding. Extensive experiments were conducted on three real-world datasets: Beauty, Toys, and ML-1M. The results show that, compared with the second-best model, LEDADP improves Recall@5 by 2.87%, 7.83%, and 4.1%, and NDCG@5 by 2.03%, 4.08%, and 6.8% on the three datasets respectively, demonstrating the effectiveness of the model.
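The noise filtering step described in the abstract can be illustrated with a minimal sketch: transform the item-embedding sequence into the frequency domain, suppress high-frequency components (which tend to carry interaction noise), and transform back. The function name, the `keep_ratio` hyper-parameter, and the fixed low-pass cutoff below are assumptions for illustration; the paper's actual module may instead learn the filter weights.

```python
import numpy as np

def frequency_filter(seq_emb: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Low-pass filter a sequence of item embeddings in the frequency domain.

    seq_emb: (seq_len, dim) array of item embeddings.
    keep_ratio: fraction of low-frequency components to retain
        (a hypothetical hyper-parameter for this sketch).
    """
    # FFT along the sequence axis: time domain -> frequency domain.
    spectrum = np.fft.rfft(seq_emb, axis=0)
    # Zero out the highest-frequency components, which tend to carry noise.
    cutoff = max(1, int(np.ceil(spectrum.shape[0] * keep_ratio)))
    spectrum[cutoff:] = 0
    # Inverse FFT back to the time domain, preserving the original length.
    return np.fft.irfft(spectrum, n=seq_emb.shape[0], axis=0)
```

A constant (noise-free) sequence passes through unchanged, since it contains only the zero-frequency component, while rapid item-to-item fluctuations are smoothed away.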
About the journal:
The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency.
Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.