{"title":"基于注意机制和密集网络模型的电子商务特色商品推荐系统研究","authors":"Daocai Wang","doi":"10.1016/j.sasc.2025.200216","DOIUrl":null,"url":null,"abstract":"<div><div>This paper constructs two cross-domain recommendation models based on the perspective of user sharing and non-sharing, both of which rely on intensive convolutional networks and attention mechanisms. This research introduces lightweight Dense Net and fine-grained pruning for model optimization. Lightweight Dense Net retains the core advantages by optimizing the repeat structure while reducing redundant parameters. Compared with the original network, the accuracy loss is not >2 %, the number of parameters is reduced to 204.96Mb, the compression ratio is 8.38, and the computational amount is reduced by 0.96Gflops, which facilitates the hardware deployment. Given the problem that lightweight Dense Net has no practical optimization in storage and computing after sparsing, this paper innovatively proposes a CSB compression storage method and supporting sparse convolution algorithm, which can effectively reduce the computing and storage requirements of inference network, realize the real computing acceleration and storage optimization, and overcome the hardware deployment problems. Compared with the original network, the accuracy loss is not >2 %, the number of parameters is reduced to 204.96Mb, the compression ratio is 8.38, and the computational amount is reduced by 0.96Gflops, which facilitates the hardware deployment. Given the problem that lightweight Dense Net has no practical optimization in storage and computing after sparsing, this paper innovatively proposes a CSB compression storage method and supporting sparse convolution algorithm, which can effectively reduce the computing and storage requirements of inference network, realize the real computing acceleration and storage optimization, and overcome the hardware deployment problems.</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"7 ","pages":"Article 200216"},"PeriodicalIF":3.6000,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Research on e-commerce special commodity recommendation system based on attention mechanism and Dense Net model\",\"authors\":\"Daocai Wang\",\"doi\":\"10.1016/j.sasc.2025.200216\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This paper constructs two cross-domain recommendation models based on the perspective of user sharing and non-sharing, both of which rely on intensive convolutional networks and attention mechanisms. This research introduces lightweight Dense Net and fine-grained pruning for model optimization. Lightweight Dense Net retains the core advantages by optimizing the repeat structure while reducing redundant parameters. Compared with the original network, the accuracy loss is not >2 %, the number of parameters is reduced to 204.96Mb, the compression ratio is 8.38, and the computational amount is reduced by 0.96Gflops, which facilitates the hardware deployment. 
Given the problem that lightweight Dense Net has no practical optimization in storage and computing after sparsing, this paper innovatively proposes a CSB compression storage method and supporting sparse convolution algorithm, which can effectively reduce the computing and storage requirements of inference network, realize the real computing acceleration and storage optimization, and overcome the hardware deployment problems. Compared with the original network, the accuracy loss is not >2 %, the number of parameters is reduced to 204.96Mb, the compression ratio is 8.38, and the computational amount is reduced by 0.96Gflops, which facilitates the hardware deployment. Given the problem that lightweight Dense Net has no practical optimization in storage and computing after sparsing, this paper innovatively proposes a CSB compression storage method and supporting sparse convolution algorithm, which can effectively reduce the computing and storage requirements of inference network, realize the real computing acceleration and storage optimization, and overcome the hardware deployment problems.</div></div>\",\"PeriodicalId\":101205,\"journal\":{\"name\":\"Systems and Soft Computing\",\"volume\":\"7 \",\"pages\":\"Article 200216\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2025-03-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Systems and Soft Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772941925000341\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Systems and Soft Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772941925000341","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Research on e-commerce special commodity recommendation system based on attention mechanism and Dense Net model
This paper constructs two cross-domain recommendation models, built from the perspectives of shared and non-shared users, both of which rely on dense convolutional networks (Dense Net) and attention mechanisms. For model optimization, the research introduces a lightweight Dense Net together with fine-grained pruning. The lightweight Dense Net retains the core advantages of the original architecture by optimizing its repeated structure while removing redundant parameters. Compared with the original network, the accuracy loss is no more than 2 %, the parameter size is reduced to 204.96 Mb, the compression ratio reaches 8.38, and the computation is cut by 0.96 GFLOPs, which eases hardware deployment. Because sparsification alone brings the lightweight Dense Net no practical gains in storage or computing, this paper further proposes a CSB compressed storage method with a matching sparse convolution algorithm, which effectively reduces the computing and storage requirements of the inference network, delivers real computing acceleration and storage savings, and overcomes the hardware deployment problems.
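The abstract does not detail the recommendation architecture, but the combination it names, attention over item features feeding a Dense Net-style backbone, can be illustrated with a minimal sketch. All layer names, dimensions, and the dot-product attention form below are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (assumed architecture): dot-product attention that reweights
# item-feature embeddings before they enter a DenseNet-style scoring backbone.
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.query = nn.Linear(feat_dim, feat_dim)
        self.key = nn.Linear(feat_dim, feat_dim)
        self.value = nn.Linear(feat_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_items, feat_dim) item-feature embeddings
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = torch.softmax(q @ k.transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        return scores @ v  # attention-weighted features, same shape as x

attn = FeatureAttention(feat_dim=64)
items = torch.randn(8, 20, 64)   # 8 users, 20 candidate items each (toy sizes)
weighted = attn(items)           # (8, 20, 64), ready for a Dense Net scoring head
```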
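Fine-grained pruning, as referred to in the abstract, generally means zeroing individual low-magnitude weights rather than whole channels. The sketch below shows the generic element-wise magnitude-pruning step that produces such sparsity; the 90 % sparsity target and tensor shapes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of fine-grained (element-wise) magnitude pruning.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask

conv_w = np.random.randn(128, 64, 3, 3).astype(np.float32)  # a DenseNet-style conv kernel
pruned = magnitude_prune(conv_w, sparsity=0.9)
print("nonzero ratio:", np.count_nonzero(pruned) / pruned.size)  # roughly 0.1
```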
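The abstract does not specify the layout of the proposed CSB format, so the following sketch only conveys the general idea behind compressed sparse storage plus a matching sparse convolution: keep only the nonzero pruned weights and lower the convolution to a sparse-by-dense matrix product. CSR is used here purely as a stand-in for CSB, and all shapes are illustrative.

```python
# Minimal sketch: compressed sparse weight storage (CSR stand-in for CSB) and
# a sparse convolution implemented as im2col followed by a sparse matmul.
import numpy as np
from scipy.sparse import csr_matrix

def sparse_conv2d(weight: np.ndarray, x: np.ndarray) -> np.ndarray:
    """weight: (out_c, in_c, k, k) pruned kernel; x: (in_c, H, W). 'Valid' conv, stride 1."""
    out_c, in_c, k, _ = weight.shape
    _, H, W = x.shape
    oh, ow = H - k + 1, W - k + 1
    # im2col: each column holds one flattened receptive-field patch
    cols = np.empty((in_c * k * k, oh * ow), dtype=x.dtype)
    idx = 0
    for i in range(oh):
        for j in range(ow):
            cols[:, idx] = x[:, i:i + k, j:j + k].ravel()
            idx += 1
    w_sparse = csr_matrix(weight.reshape(out_c, -1))  # only nonzero weights are stored
    return np.asarray(w_sparse @ cols).reshape(out_c, oh, ow)

w = np.random.randn(16, 8, 3, 3)
w[np.abs(w) < 1.0] = 0.0                       # pretend the kernel was pruned
y = sparse_conv2d(w, np.random.randn(8, 32, 32))
print(y.shape)                                  # (16, 30, 30)
```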