Determining the optimal number of clusters by Enhanced Gap Statistic in K-mean algorithm

Impact Factor: 5.0 · CAS Tier 3 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Iliyas Karim Khan, Hanita Binti Daud, Nooraini Binti Zainuddin, Rajalingam Sokkalingam, Muhammad Farooq, Muzammil Elahi Baig, Gohar Ayub, Mudasar Zafar
{"title":"Determining the optimal number of clusters by Enhanced Gap Statistic in K-mean algorithm","authors":"Iliyas Karim Khan ,&nbsp;Hanita Binti Daud ,&nbsp;Nooraini Binti Zainuddin ,&nbsp;Rajalingam Sokkalingam ,&nbsp;Muhammad Farooq ,&nbsp;Muzammil Elahi Baig ,&nbsp;Gohar Ayub ,&nbsp;Mudasar Zafar","doi":"10.1016/j.eij.2024.100504","DOIUrl":null,"url":null,"abstract":"<div><p>Unsupervised learning, particularly K-means clustering, seeks to partition data into clusters with distinct intra-class cohesion and inter-class disparity. However, the arbitrary selection of clusters in K-means introduces challenges, leading to trial and error in determining the Optimal Number of Clusters (ONC). To address this, various methodologies have been devised, among which the Gap Statistic is prominent. Gap Statistic reliance on expected values for reference data selection poses limitations, especially in scenarios involving diverse scale, noise, and overlapping data.</p><p>To tackle these challenges, this study introduces Enhanced Gap Statistic (EGS), which standardizes reference data using an exponential distribution within the Gap Statistic framework, integrating an adjustment factor for a more dependable estimation of the ONC. Application of EGS to K-means clustering facilitates accurate ONC determination. For comparison purposes, EGS is benchmarked against traditional Gap Statistic and other established methods used for ONC selection in K-means, evaluating accuracy and efficiency across datasets with varying characteristics. The results demonstrate EGS superior accuracy and efficiency, affirming its effectiveness in diverse data environments.</p></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1110866524000677/pdfft?md5=b38f7fc240484c948d461e5afbf4d41b&pid=1-s2.0-S1110866524000677-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Egyptian Informatics Journal","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110866524000677","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
DOI: 10.1016/j.eij.2024.100504 · Published 2024-07-17 · Egyptian Informatics Journal

Abstract

Unsupervised learning, particularly K-means clustering, seeks to partition data into clusters with strong intra-class cohesion and distinct inter-class disparity. However, the arbitrary selection of the number of clusters in K-means introduces challenges, leading to trial and error in determining the Optimal Number of Clusters (ONC). To address this, various methodologies have been devised, among which the Gap Statistic is prominent. The Gap Statistic's reliance on expected values for reference data selection, however, poses limitations, especially in scenarios involving diverse scales, noise, and overlapping data.
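To make the baseline concrete, the following is a minimal Python sketch of the classical Gap Statistic (Tibshirani, Walther and Hastie, 2001), which compares the within-cluster dispersion of the data to that of reference datasets drawn uniformly over each feature's observed range. The function names and parameters (`n_refs`, `k_max`) are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the classical Gap Statistic with a uniform reference
# distribution; this is the baseline that EGS modifies, not EGS itself.
import numpy as np
from sklearn.cluster import KMeans

def within_dispersion(X, labels):
    """Pooled within-cluster sum of squared distances to cluster centroids (W_k)."""
    w = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        w += ((pts - pts.mean(axis=0)) ** 2).sum()
    return w

def gap_statistic(X, k_max=10, n_refs=10, random_state=0):
    rng = np.random.default_rng(random_state)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps, sks = [], []
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
        log_wk = np.log(within_dispersion(X, km.labels_))
        # Reference datasets drawn uniformly over the observed feature ranges.
        log_wk_ref = []
        for _ in range(n_refs):
            X_ref = rng.uniform(lo, hi, size=X.shape)
            km_ref = KMeans(n_clusters=k, n_init=10,
                            random_state=random_state).fit(X_ref)
            log_wk_ref.append(np.log(within_dispersion(X_ref, km_ref.labels_)))
        log_wk_ref = np.array(log_wk_ref)
        gaps.append(log_wk_ref.mean() - log_wk)
        sks.append(log_wk_ref.std() * np.sqrt(1 + 1 / n_refs))
    # Choose the smallest k with Gap(k) >= Gap(k+1) - s_{k+1}.
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sks[k]:
            return k, gaps, sks
    return k_max, gaps, sks
```

A typical call is `k, gaps, sks = gap_statistic(X, k_max=8)`, where `X` is an (n_samples, n_features) array.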

To tackle these challenges, this study introduces the Enhanced Gap Statistic (EGS), which standardizes the reference data using an exponential distribution within the Gap Statistic framework and integrates an adjustment factor for a more dependable estimate of the ONC. Applying EGS to K-means clustering facilitates accurate ONC determination. For comparison, EGS is benchmarked against the traditional Gap Statistic and other established methods for ONC selection in K-means, evaluating accuracy and efficiency across datasets with varying characteristics. The results demonstrate EGS's superior accuracy and efficiency, affirming its effectiveness in diverse data environments.
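The abstract does not give the exact standardization or the form of the adjustment factor, so the sketch below only illustrates the kind of change EGS describes: replacing the uniform reference generator with one based on an exponential distribution. `draw_exponential_reference` and its per-feature scaling are assumptions made for illustration, not the authors' formula.

```python
# Hypothetical illustration of an exponential-distribution reference generator
# in the spirit of EGS; the per-feature scale and anchoring are assumptions.
import numpy as np

def draw_exponential_reference(X, rng):
    """Draw one reference dataset whose features follow exponential distributions
    with spread matched (per feature) to the observed data (assumed scaling)."""
    n, p = X.shape
    ref = np.empty_like(X, dtype=float)
    for j in range(p):
        col = X[:, j]
        scale = col.std() if col.std() > 0 else 1.0  # assumed: scale ~ feature std
        # Anchor the draws at the feature minimum so the reference spans a
        # comparable range to the observed feature.
        ref[:, j] = col.min() + rng.exponential(scale, size=n)
    return ref
```

In the earlier sketch, `X_ref = rng.uniform(lo, hi, size=X.shape)` would be replaced by `X_ref = draw_exponential_reference(X, rng)`; the paper's adjustment factor would then be applied to the resulting gap values, but its definition is not stated in the abstract and is not reproduced here.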

Source journal
Egyptian Informatics Journal (Decision Sciences: Management Science and Operations Research)
CiteScore: 11.10
Self-citation rate: 1.90%
Articles published: 59
Review time: 110 days
Journal description: The Egyptian Informatics Journal is published by the Faculty of Computers and Artificial Intelligence, Cairo University. The Journal provides a forum for state-of-the-art research and development in the fields of computing, including computer science, information technology, information systems, operations research, and decision support. Innovative, previously unpublished work in subjects covered by the Journal is encouraged, whether from academic, research, or commercial sources.