{"title":"推荐中的非负稀疏线性自编码器迭代更新方案","authors":"Xuan Li, Shifei Ding","doi":"10.1016/j.ipm.2025.104314","DOIUrl":null,"url":null,"abstract":"<div><div>Linear autoencoder models with nonnegative constraints and L1 regularization, such as the sparse linear method (SLIM), have shown remarkable performance while maintaining interpretability. However, their practicality is limited by computationally expensive training processes. This paper proposes a simple yet effective training framework for nonnegative and sparse linear autoencoders. We first develop a simple iterative update scheme (IUS) for SLIM and present a theoretical analysis of its convergence and correctness. To enhance computational efficiency, we then introduce a filtering step that prunes insignificant parameters at each iteration in practice. Based on this training scheme, we derive two model variants by removing the zero-diagonal constraint and utilizing random dropout denoising to replace L2 regularization (i.e., the dropout-based regularization in DLAE), respectively. Experimental results demonstrate that the proposed IUS algorithm reduces training time by 53.8–68.5% and memory usage by 55.6% compared to the alternating direction method of multipliers (ADMM) across six benchmark datasets. The proposed model variants achieve comparable or superior performance to state-of-the-art collaborative filtering models on all real-world datasets. These findings validate the proposed training framework’s capability to enable feasible deployment of SLIM-like models in efficiency-critical and resource-constrained environments.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 1","pages":"Article 104314"},"PeriodicalIF":6.9000,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Iterative update scheme for nonnegative and sparse linear autoencoders in recommendation\",\"authors\":\"Xuan Li, Shifei Ding\",\"doi\":\"10.1016/j.ipm.2025.104314\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Linear autoencoder models with nonnegative constraints and L1 regularization, such as the sparse linear method (SLIM), have shown remarkable performance while maintaining interpretability. However, their practicality is limited by computationally expensive training processes. This paper proposes a simple yet effective training framework for nonnegative and sparse linear autoencoders. We first develop a simple iterative update scheme (IUS) for SLIM and present a theoretical analysis of its convergence and correctness. To enhance computational efficiency, we then introduce a filtering step that prunes insignificant parameters at each iteration in practice. Based on this training scheme, we derive two model variants by removing the zero-diagonal constraint and utilizing random dropout denoising to replace L2 regularization (i.e., the dropout-based regularization in DLAE), respectively. Experimental results demonstrate that the proposed IUS algorithm reduces training time by 53.8–68.5% and memory usage by 55.6% compared to the alternating direction method of multipliers (ADMM) across six benchmark datasets. The proposed model variants achieve comparable or superior performance to state-of-the-art collaborative filtering models on all real-world datasets. 
These findings validate the proposed training framework’s capability to enable feasible deployment of SLIM-like models in efficiency-critical and resource-constrained environments.</div></div>\",\"PeriodicalId\":50365,\"journal\":{\"name\":\"Information Processing & Management\",\"volume\":\"63 1\",\"pages\":\"Article 104314\"},\"PeriodicalIF\":6.9000,\"publicationDate\":\"2025-08-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing & Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306457325002559\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325002559","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Iterative update scheme for nonnegative and sparse linear autoencoders in recommendation
Linear autoencoder models with nonnegative constraints and L1 regularization, such as the sparse linear method (SLIM), have shown remarkable performance while maintaining interpretability. However, their practicality is limited by computationally expensive training processes. This paper proposes a simple yet effective training framework for nonnegative and sparse linear autoencoders. We first develop a simple iterative update scheme (IUS) for SLIM and present a theoretical analysis of its convergence and correctness. To enhance computational efficiency, we then introduce a filtering step that prunes insignificant parameters at each iteration in practice. Based on this training scheme, we derive two model variants by removing the zero-diagonal constraint and utilizing random dropout denoising to replace L2 regularization (i.e., the dropout-based regularization in DLAE), respectively. Experimental results demonstrate that the proposed IUS algorithm reduces training time by 53.8–68.5% and memory usage by 55.6% compared to the alternating direction method of multipliers (ADMM) across six benchmark datasets. The proposed model variants achieve comparable or superior performance to state-of-the-art collaborative filtering models on all real-world datasets. These findings validate the proposed training framework’s capability to enable feasible deployment of SLIM-like models in efficiency-critical and resource-constrained environments.
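For context, the optimization problem behind SLIM (from Ning and Karypis's original Sparse LInear Method formulation, on which models like this build) is a least-squares item-item regression with L1 and L2 regularization, a nonnegativity constraint, and a zero-diagonal constraint that rules out the trivial identity solution:

$$
\min_{B}\ \tfrac{1}{2}\lVert X - XB\rVert_F^2 + \tfrac{\beta}{2}\lVert B\rVert_F^2 + \lambda\lVert B\rVert_1
\quad \text{s.t.}\quad B \ge 0,\ \operatorname{diag}(B) = 0,
$$

where $X$ is the user-item interaction matrix and $B$ the learned item-item weight matrix. The abstract does not spell out the IUS update rule itself, so the sketch below is only a generic projected proximal-gradient loop over this objective, augmented with the pruning ("filtering") step the abstract mentions; the function name `slim_iterative_update`, the `prune_tol` threshold, and all parameter defaults are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch of one iterative update scheme for a SLIM-style objective:
#   min_B 0.5*||X - XB||_F^2 + 0.5*beta*||B||_F^2 + lam*||B||_1
#   s.t.  B >= 0, diag(B) = 0
# NOT the paper's exact IUS (the abstract gives no update rule); a plain
# projected proximal-gradient (ISTA-style) loop under stated assumptions.
import numpy as np

def slim_iterative_update(X, lam=0.5, beta=2.0, n_iters=100, prune_tol=1e-4):
    """Fit a nonnegative, sparse, zero-diagonal item-item matrix B."""
    n_items = X.shape[1]
    G = X.T @ X                       # Gram matrix, computed once and reused
    step = 1.0 / (np.linalg.norm(G, 2) + beta)  # 1 / Lipschitz constant
    B = np.zeros((n_items, n_items))
    for _ in range(n_iters):
        grad = G @ B - G + beta * B   # gradient of the smooth terms
        B = B - step * grad
        B = np.maximum(B - step * lam, 0.0)  # soft-threshold + enforce B >= 0
        np.fill_diagonal(B, 0.0)             # zero-diagonal constraint
        B[B < prune_tol] = 0.0               # "filtering": prune tiny weights
    return B

# Toy usage on a random binary interaction matrix (users x items)
rng = np.random.default_rng(0)
X = (rng.random((50, 20)) < 0.2).astype(float)
B = slim_iterative_update(X)
print("nonzeros:", np.count_nonzero(B), "min entry:", B.min())
```

Interleaving the zero-diagonal projection with the proximal step, as above, is a common heuristic in SLIM-type solvers; the paper's IUS presumably treats the constraints more carefully, which is what its convergence and correctness analysis would establish.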
Journal introduction:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.