Particle Swarm Optimization Method Combined with off Policy Reinforcement Learning Algorithm for the Discovery of High Utility Itemset

IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (Automation & Control Systems)
K. Logeswaran, P. Suresh, S. Anandamurugan
{"title":"结合非策略强化学习算法的粒子群优化高效用项集发现方法","authors":"K. Logeswaran, P. Suresh, S. Anandamurugan","doi":"10.5755/j01.itc.52.1.31949","DOIUrl":null,"url":null,"abstract":"Mining of High Utility Itemset (HUI) is an area of high importance in data mining that involves numerous methodologies for addressing it effectively. When the diversity of items and size of an item is quite vast in the given dataset, then the problem search space that needs to be solved by conventional exact approaches to High Utility Itemset Mining (HUIM) also increases in terms of exponential. This factual issue has made the researchers to choose alternate yet efficient approaches based on Evolutionary Computation (EC) to solve the HUIM problem. Particle Swarm Optimization (PSO) is an EC-based approach that has drawn the attention of many researchers to unravel different NP-Hard problems in real-time. Variants of PSO techniques have been established in recent years to increase the efficiency of the HUIs mining process. In PSO, the Minimization of execution time and generation of reasonable decent solutions were greatly influenced by the PSO control parameters namely Acceleration Coefficient and  and Inertia Weight. The proposed approach is called Adaptive Particle Swarm Optimization using Reinforcement Learning with Off Policy (APSO-RLOFF), which employs the Reinforcement Learning (RL) concept to achieve the adaptive online calibration of PSO control and, in turn, to increase the performance of PSO. The state-of-the-art RL approach called the Q-Learning algorithm is employed in the APSO-RLOFF approach. In RL, state-action utility values are estimated during each episode using Q-Learning. Extensive tests are carried out on four benchmark datasets to evaluate the performance of the suggested technique. An exact approach called HUP-Miner and three EC-based approaches, namely HUPEUMU-GRAM, HUIM-BPSO, and AGA_RLOFF, are used to relate the performance of the anticipated approach. From the outcome, it is inferred that the performance metrics of APSO-RLOFF, namely no of discovered HUIs and execution time, outstrip the previously considered EC computations.\n ","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"5 1","pages":"25-36"},"PeriodicalIF":2.0000,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Particle Swarm Optimization Method Combined with off Policy Reinforcement Learning Algorithm for the Discovery of High Utility Itemset\",\"authors\":\"K. Logeswaran, P. Suresh, S. Anandamurugan\",\"doi\":\"10.5755/j01.itc.52.1.31949\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Mining of High Utility Itemset (HUI) is an area of high importance in data mining that involves numerous methodologies for addressing it effectively. When the diversity of items and size of an item is quite vast in the given dataset, then the problem search space that needs to be solved by conventional exact approaches to High Utility Itemset Mining (HUIM) also increases in terms of exponential. This factual issue has made the researchers to choose alternate yet efficient approaches based on Evolutionary Computation (EC) to solve the HUIM problem. Particle Swarm Optimization (PSO) is an EC-based approach that has drawn the attention of many researchers to unravel different NP-Hard problems in real-time. 
Variants of PSO techniques have been established in recent years to increase the efficiency of the HUIs mining process. In PSO, the Minimization of execution time and generation of reasonable decent solutions were greatly influenced by the PSO control parameters namely Acceleration Coefficient and  and Inertia Weight. The proposed approach is called Adaptive Particle Swarm Optimization using Reinforcement Learning with Off Policy (APSO-RLOFF), which employs the Reinforcement Learning (RL) concept to achieve the adaptive online calibration of PSO control and, in turn, to increase the performance of PSO. The state-of-the-art RL approach called the Q-Learning algorithm is employed in the APSO-RLOFF approach. In RL, state-action utility values are estimated during each episode using Q-Learning. Extensive tests are carried out on four benchmark datasets to evaluate the performance of the suggested technique. An exact approach called HUP-Miner and three EC-based approaches, namely HUPEUMU-GRAM, HUIM-BPSO, and AGA_RLOFF, are used to relate the performance of the anticipated approach. From the outcome, it is inferred that the performance metrics of APSO-RLOFF, namely no of discovered HUIs and execution time, outstrip the previously considered EC computations.\\n \",\"PeriodicalId\":54982,\"journal\":{\"name\":\"Information Technology and Control\",\"volume\":\"5 1\",\"pages\":\"25-36\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2023-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Technology and Control\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.5755/j01.itc.52.1.31949\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Technology and Control","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.5755/j01.itc.52.1.31949","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 1

Abstract

High Utility Itemset (HUI) mining is a highly important area of data mining, and numerous methodologies have been proposed to address it effectively. When the variety of items and the size of the dataset are large, the search space that conventional exact approaches to High Utility Itemset Mining (HUIM) must explore grows exponentially. This issue has led researchers to choose alternative yet efficient approaches based on Evolutionary Computation (EC) to solve the HUIM problem. Particle Swarm Optimization (PSO) is an EC-based approach that has drawn the attention of many researchers for tackling various NP-hard problems, and variants of PSO have been developed in recent years to increase the efficiency of HUI mining. In PSO, execution time and the quality of the generated solutions are strongly influenced by the control parameters, namely the acceleration coefficients and the inertia weight. The proposed approach, Adaptive Particle Swarm Optimization using Reinforcement Learning with Off Policy (APSO-RLOFF), employs Reinforcement Learning (RL) to calibrate these PSO control parameters adaptively online and, in turn, to increase the performance of PSO. The off-policy RL method employed in APSO-RLOFF is the Q-Learning algorithm, which estimates state-action utility values during each episode. Extensive experiments on four benchmark datasets evaluate the performance of the proposed technique against an exact approach, HUP-Miner, and three EC-based approaches: HUPEUMU-GRAM, HUIM-BPSO, and AGA_RLOFF. The results show that APSO-RLOFF outperforms the previously considered EC approaches on both performance metrics: the number of discovered HUIs and execution time.
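For context, the utility of an itemset in HUIM is conventionally defined over a transaction database in which each item has a per-transaction purchase quantity (internal utility) and a per-item unit profit (external utility). The sketch below illustrates that standard definition; the sample transactions, the `unit_profit` table, and the `min_utility` threshold are illustrative stand-ins, not data from the paper's benchmarks.

```python
# Minimal sketch of the standard HUIM utility computation.
# transactions: list of {item: purchase quantity}; unit_profit: per-item profit.
transactions = [
    {"a": 2, "b": 1},
    {"a": 1, "c": 3},
    {"a": 2, "b": 2, "c": 1},
]
unit_profit = {"a": 5, "b": 3, "c": 1}  # external utility per item

def utility(itemset, transactions, unit_profit):
    """Sum the itemset's utility over all transactions that contain it."""
    total = 0
    for t in transactions:
        if all(item in t for item in itemset):
            total += sum(t[item] * unit_profit[item] for item in itemset)
    return total

min_utility = 20  # illustrative threshold
itemset = {"a", "b"}
if utility(itemset, transactions, unit_profit) >= min_utility:
    print(itemset, "is a high utility itemset")  # utility({a,b}) = 13 + 16 = 29
```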
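The control parameters named in the abstract appear in the canonical PSO update rules. Since the abstract does not specify the exact PSO variant (HUIM-BPSO, for instance, is binary), the standard continuous formulation is shown only for reference:

$$
\begin{aligned}
v_i^{t+1} &= w\,v_i^{t} + c_1 r_1\,(p_i - x_i^{t}) + c_2 r_2\,(g - x_i^{t}),\\
x_i^{t+1} &= x_i^{t} + v_i^{t+1},
\end{aligned}
$$

where \(w\) is the inertia weight, \(c_1\) and \(c_2\) are the acceleration coefficients, \(r_1, r_2 \sim U(0,1)\), and \(p_i\) and \(g\) are the personal-best and global-best positions.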
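The abstract describes Q-Learning calibrating the PSO control parameters online. The sketch below shows one plausible shape for such a tuner; the class name `QLearningTuner`, the state labels, the action set of (inertia weight, acceleration coefficient) pairs, and the reward signal are all assumptions made for illustration, not the design published in the paper.

```python
import random
from collections import defaultdict

class QLearningTuner:
    """Q-Learning over a discrete set of PSO parameter settings."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.actions = actions          # candidate (inertia weight, accel. coeff.) pairs
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)     # Q[(state, action_index)]

    def choose(self, state):
        # Epsilon-greedy selection over parameter settings.
        if random.random() < self.epsilon:
            return random.randrange(len(self.actions))
        return max(range(len(self.actions)), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Off-policy Q-Learning update: bootstrap from the greedy value
        # of the next state regardless of the action actually taken.
        best_next = max(self.q[(next_state, a)] for a in range(len(self.actions)))
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Hypothetical usage inside a PSO loop: the state could summarize swarm
# progress, and the reward could be the number of newly discovered HUIs.
tuner = QLearningTuner(actions=[(0.4, 1.5), (0.7, 2.0), (0.9, 2.5)])
state = "stagnating"
for iteration in range(100):
    a = tuner.choose(state)
    w, c = tuner.actions[a]
    # ... run one PSO iteration with inertia weight w and c1 = c2 = c ...
    new_huis = random.randint(0, 3)     # stand-in for real mining feedback
    next_state = "improving" if new_huis else "stagnating"
    tuner.update(state, a, float(new_huis), next_state)
    state = next_state
```

Because Q-Learning bootstraps from the greedy action while behaving epsilon-greedily, it is off-policy, which matches the "Off Policy" element of the APSO-RLOFF name.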
Source journal
Information Technology and Control (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 2.70
Self-citation rate: 9.10%
Articles per year: 36
Review time: 12 months
Journal description: The journal covers a wide field of computer science and control systems related problems, including:
-Software and hardware engineering;
-Management systems engineering;
-Information systems and databases;
-Embedded systems;
-Physical systems modelling and application;
-Computer networks and cloud computing;
-Data visualization;
-Human-computer interface;
-Computer graphics, visual analytics, and multimedia systems.