Comparing Accuracies of Rule Evaluation Models to Determine Human Criteria on Evaluated Rule Sets

H. Abe, S. Tsumoto
2008 IEEE International Conference on Data Mining Workshops
DOI: 10.1109/ICDMW.2008.49
Published: 2008-12-15
Citations: 5

Abstract

In data mining post-processing, rule selection using objective rule evaluation indices is a useful method for discovering valuable knowledge in mined patterns. However, the relationship between an index value and experts' criteria has never been clarified. In this study, we compared the accuracies of classification learning algorithms on datasets with randomized class distributions and on real human evaluations. To determine the relationship, we used rule evaluation models, which are learned from a dataset consisting of objective rule evaluation indices and an evaluation label for each rule. The results show that the accuracies of classification learning algorithms with and without human experts' criteria differ from those on a balanced randomized class distribution. Based on these results, we can consider a way to distinguish randomly evaluated rules using the accuracies of multiple learning algorithms.
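The core idea above can be illustrated with a minimal stand-in sketch. The data here is synthetic: each "rule" gets two hypothetical objective indices (support, confidence), and a simulated expert label that depends mostly on confidence. A simple leave-one-out nearest-neighbour classifier stands in for the paper's learning algorithms; none of these choices come from the paper itself. The point is the comparison: a model trained on labels that reflect some criterion should beat chance, while the same model trained on randomly shuffled labels should not.

```python
import random
import math

random.seed(42)

# Hypothetical synthetic rule set: (support, confidence) as objective
# indices, plus a simulated expert label driven mainly by confidence.
N = 200
rules = []
for _ in range(N):
    support = random.random()
    confidence = random.random()
    label = 1 if confidence + random.gauss(0, 0.1) > 0.5 else 0
    rules.append(((support, confidence), label))

def loo_1nn_accuracy(data):
    """Leave-one-out 1-nearest-neighbour accuracy on (features, label) pairs."""
    correct = 0
    for i, (xi, yi) in enumerate(data):
        best_dist, best_label = float("inf"), None
        for j, (xj, yj) in enumerate(data):
            if i == j:
                continue
            d = math.dist(xi, xj)
            if d < best_dist:
                best_dist, best_label = d, yj
        correct += (best_label == yi)
    return correct / len(data)

# Accuracy when labels follow the (simulated) expert criterion.
acc_real = loo_1nn_accuracy(rules)

# Accuracy after destroying any relation between indices and labels,
# mimicking a randomized class distribution.
shuffled = [y for _, y in rules]
random.shuffle(shuffled)
acc_random = loo_1nn_accuracy([(x, y) for (x, _), y in zip(rules, shuffled)])

print(f"accuracy with expert-like labels: {acc_real:.2f}")
print(f"accuracy with randomized labels:  {acc_random:.2f}")
```

The gap between the two accuracies is the signal the paper exploits: when several learning algorithms all fail to beat chance on a rule set, the human evaluations behind it may themselves be effectively random.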