Safe and accelerated screening framework for support tensor machines

Impact Factor: 6.0 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Xiao Li , Hongmei Wang , Yitian Xu
Neural Networks, Volume 188, Article 107458
DOI: 10.1016/j.neunet.2025.107458
Published: 2025-04-09
Citations: 0

Abstract

Support Tensor Machines (STMs) constitute an effective supervised learning method for classifying high-dimensional tensor data. However, traditional iterative solving methods are often time-consuming. To effectively address the issue of lengthy training times, inspired by the safe screening strategies employed in support vector machines, we generalize the safe screening rule to the tensor domain and propose a novel safe screening rule for STM, which includes the dual static screening rule (DSSR), the dynamic screening rule (DGSR), and a subsequent checking verification. The screening rule initially employs variational inequalities to screen out a portion of redundant samples before training, reducing the problem scale. During the training process, the rule further accelerates training by iteratively screening redundant samples using the duality gap. We also design a subsequent checking technique based on optimality conditions to guarantee the safety of the screening rule. Building on this, we also develop a flexible safe screening framework, referred to as DS-DGSR, which incorporates the DSSR and the DGSR. It not only tackles the challenges of combining various tensor decomposition methods and the diverse scenarios of the decomposed coefficient parameter and decomposed samples in STMs, but also offers flexible adaptation and application according to the characteristics of different STMs. Numerical experiments on multiple real-world high-dimensional tensor datasets confirm the effectiveness and feasibility of DS-DGSR.
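The dynamic rule (DGSR) described above builds on duality-gap safe screening ideas from the SVM literature. As an illustration only (the paper's tensor-domain rule is not reproduced here, and all names below are our own), the sketch applies the standard gap-safe test to a vector-space L2-regularised hinge-loss SVM: strong convexity lets the duality gap bound the distance from the current primal iterate to the optimum, and any sample whose margin provably exceeds 1 over that whole ball cannot be a support vector, so it can be safely discarded.

```python
import numpy as np

def gap_safe_screen(X, y, w, alpha, lam):
    """One round of duality-gap safe screening for an L2-regularised
    hinge-loss SVM (illustrative analogue of the dynamic rule, not the
    paper's tensor implementation).

    X: (n, d) samples; y: (n,) labels in {-1, +1};
    w: current primal iterate; alpha: dual iterate in [0, 1]^n;
    lam: ridge parameter. Returns a boolean mask of samples to keep.
    """
    n = X.shape[0]
    margins = y * (X @ w)

    # Primal objective: mean hinge loss plus ridge penalty.
    primal = np.mean(np.maximum(0.0, 1.0 - margins)) + 0.5 * lam * (w @ w)

    # Dual objective; w_alpha is the primal point the dual variables encode.
    w_alpha = X.T @ (alpha * y) / (lam * n)
    dual = alpha.mean() - 0.5 * lam * (w_alpha @ w_alpha)
    gap = max(primal - dual, 0.0)

    # lam-strong convexity of the primal confines the optimum w* to a ball
    # of radius r centred at the current iterate w.
    r = np.sqrt(2.0 * gap / lam)

    # If a sample's margin exceeds 1 everywhere in that ball, its hinge loss
    # is inactive at the optimum (alpha_i* = 0), so it can be screened out.
    norms = np.linalg.norm(X, axis=1)
    keep = margins - r * norms <= 1.0
    return keep
```

In an iterative solver such a test would be re-run every few epochs: as the duality gap shrinks, the safe ball tightens and progressively more redundant samples are removed, which is the acceleration mechanism the abstract describes.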
Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.