Kehinde Olobatuyi, Matthew R. P. Parker, Oludare Ariyo
{"title":"基于TSNE算法的高维数据聚类加权模型","authors":"Kehinde Olobatuyi, Matthew R. P. Parker, Oludare Ariyo","doi":"10.1007/s41060-023-00422-8","DOIUrl":null,"url":null,"abstract":"Cluster-weighted models (CWMs) are an important class of machine learning models that are commonly used for modelling complex datasets. However, they are known to suffer from reduced computing efficiency and estimator accuracy when dealing with high-dimensional data. Previous work has proposed a parsimonious technique that can improve CWMs’ performance in the high-dimensional data paradigm. However, this method has a setback for very high-dimensional data, where the dimensionality is greater than 100. In this paper, we propose a new hybridised method that incorporates a dimensionality reduction technique called T-distributed stochastic neighbour embedding (TSNE) to enhance the parsimonious CWMs in high-dimensional space. Additionally, we introduce a novel heuristic for detecting the hidden components of the underlying mixture model, which can be used with the popular R package FlexCWM. We evaluated the performance of the proposed method using two real datasets and found that it improves clustering power when compared to both the parsimony methods and the TSNE methods combined with CWMs in the high-dimensional data setting. Our results suggest that the proposed method can improve the efficiency and accuracy of CWMs in dealing with high-dimensional data, making it a valuable tool for data scientists and statisticians.","PeriodicalId":45667,"journal":{"name":"International Journal of Data Science and Analytics","volume":"15 1","pages":"0"},"PeriodicalIF":3.4000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Cluster weighted model based on TSNE algorithm for high-dimensional data\",\"authors\":\"Kehinde Olobatuyi, Matthew R. P. Parker, Oludare Ariyo\",\"doi\":\"10.1007/s41060-023-00422-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cluster-weighted models (CWMs) are an important class of machine learning models that are commonly used for modelling complex datasets. However, they are known to suffer from reduced computing efficiency and estimator accuracy when dealing with high-dimensional data. Previous work has proposed a parsimonious technique that can improve CWMs’ performance in the high-dimensional data paradigm. However, this method has a setback for very high-dimensional data, where the dimensionality is greater than 100. In this paper, we propose a new hybridised method that incorporates a dimensionality reduction technique called T-distributed stochastic neighbour embedding (TSNE) to enhance the parsimonious CWMs in high-dimensional space. Additionally, we introduce a novel heuristic for detecting the hidden components of the underlying mixture model, which can be used with the popular R package FlexCWM. We evaluated the performance of the proposed method using two real datasets and found that it improves clustering power when compared to both the parsimony methods and the TSNE methods combined with CWMs in the high-dimensional data setting. 
Our results suggest that the proposed method can improve the efficiency and accuracy of CWMs in dealing with high-dimensional data, making it a valuable tool for data scientists and statisticians.\",\"PeriodicalId\":45667,\"journal\":{\"name\":\"International Journal of Data Science and Analytics\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Data Science and Analytics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s41060-023-00422-8\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Data Science and Analytics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s41060-023-00422-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Cluster weighted model based on TSNE algorithm for high-dimensional data
Cluster-weighted models (CWMs) are an important class of machine learning models that are commonly used for modelling complex datasets. However, they are known to suffer from reduced computing efficiency and estimator accuracy when dealing with high-dimensional data. Previous work has proposed a parsimonious technique that improves CWMs’ performance in the high-dimensional setting; however, that method struggles with very high-dimensional data, where the dimensionality exceeds 100. In this paper, we propose a new hybridised method that incorporates the dimensionality reduction technique T-distributed stochastic neighbour embedding (TSNE) to enhance parsimonious CWMs in high-dimensional space. Additionally, we introduce a novel heuristic for detecting the hidden components of the underlying mixture model, which can be used with the popular R package FlexCWM. We evaluated the proposed method on two real datasets and found that it improves clustering performance compared with both the parsimonious methods and TSNE combined with CWMs in the high-dimensional setting. Our results suggest that the proposed method can improve the efficiency and accuracy of CWMs on high-dimensional data, making it a valuable tool for data scientists and statisticians.
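The pipeline described in the abstract can be sketched at a high level: embed the high-dimensional observations into a low-dimensional space with TSNE, then fit a finite mixture model on the embedded data and choose the number of hidden components. The paper itself pairs TSNE with parsimonious CWMs via the R package FlexCWM; the following Python sketch is only an illustrative stand-in under stated assumptions, using scikit-learn's TSNE and a Gaussian mixture with BIC-based selection in place of the CWM fit and the paper's component-detection heuristic (the simulated dataset and all parameter values are hypothetical).

```python
# Illustrative sketch only: TSNE embedding followed by mixture-model clustering.
# The paper's actual pipeline uses parsimonious CWMs via the R package FlexCWM;
# here GaussianMixture + BIC merely stand in for the CWM fit and the component heuristic.
import numpy as np
from sklearn.datasets import make_blobs        # synthetic high-dimensional data (assumption)
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture

# Simulate a high-dimensional dataset (p > 100, as in the setting discussed above).
X, _ = make_blobs(n_samples=500, n_features=150, centers=3, random_state=1)

# Step 1: reduce dimensionality with TSNE before any mixture modelling.
embedding = TSNE(n_components=2, perplexity=30.0, random_state=1).fit_transform(X)

# Step 2: fit mixtures with varying numbers of components and pick the best by BIC
# (a generic stand-in for the paper's heuristic for detecting hidden components).
candidates = range(1, 7)
fits = [GaussianMixture(n_components=g, covariance_type="full", random_state=1).fit(embedding)
        for g in candidates]
best = fits[int(np.argmin([m.bic(embedding) for m in fits]))]

labels = best.predict(embedding)               # cluster assignments on the embedded data
print("selected components:", best.n_components, "cluster sizes:", np.bincount(labels))
```

In the actual workflow, the embedded coordinates would be modelled with a parsimonious cluster-weighted model in FlexCWM rather than with a plain Gaussian mixture; the sketch only illustrates the reduce-then-cluster structure of the hybrid approach.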
About the journal:
Data Science has been established as an important emergent scientific field and paradigm driving research evolution in such disciplines as statistics, computing science and intelligence science, and practical transformation in such domains as science, engineering, the public sector, business, social science, and lifestyle. The field encompasses the larger areas of artificial intelligence, data analytics, machine learning, pattern recognition, natural language understanding, and big data manipulation. It also tackles related new scientific challenges, ranging from data capture, creation, storage, retrieval, sharing, analysis, optimization, and visualization, to integrative analysis across heterogeneous and interdependent complex resources for better decision-making, collaboration, and, ultimately, value creation.

The International Journal of Data Science and Analytics (JDSA) brings together thought leaders, researchers, industry practitioners, and potential users of data science and analytics, to develop the field, discuss new trends and opportunities, exchange ideas and practices, and promote transdisciplinary and cross-domain collaborations. The journal is composed of three streams: Regular, to communicate original and reproducible theoretical and experimental findings on data science and analytics; Applications, to report significant data science applications to real-life situations; and Trends, to report expert opinion and comprehensive surveys and reviews of relevant areas and topics in data science and analytics.

Topics of relevance include all aspects of the trends, scientific foundations, techniques, and applications of data science and analytics, with a primary focus on:
- statistical and mathematical foundations for data science and analytics;
- understanding and analytics of complex data, human, domain, network, organizational, social, behavior, and system characteristics, complexities and intelligences;
- creation and extraction, processing, representation and modelling, learning and discovery, fusion and integration, presentation and visualization of complex data, behavior, knowledge and intelligence;
- data analytics, pattern recognition, knowledge discovery, machine learning, deep analytics and deep learning, and intelligent processing of various data (including transaction, text, image, video, graph and network), behaviors and systems;
- active, real-time, personalized, actionable and automated analytics, learning, computation, optimization, presentation and recommendation;
- big data architecture, infrastructure, computing, matching, indexing, query processing, mapping, search, retrieval, interoperability, exchange, and recommendation;
- in-memory, distributed, parallel, scalable and high-performance computing, analytics and optimization for big data;
- reviews, surveys, trends, prospects and opportunities of data science research, innovation and applications;
- data science applications, intelligent devices and services in scientific, business, governmental, cultural, behavioral, social and economic, health and medical, human, natural and artificial (including online/Web, cloud, IoT, mobile and social media) domains; and
- ethics, quality, privacy, safety and security, trust, and risk of data science and analytics.