Scalable Neural Network Algorithms for High Dimensional Data

Mukesh Soni, Marwan Ali Shnan, Y. Bengio
{"title":"高维数据的可扩展神经网络算法","authors":"Mukesh Soni, Marwan Ali Shnan, Y. Bengio","doi":"10.58496/mjbd/2023/001","DOIUrl":null,"url":null,"abstract":"The boundary for machine learning engineers lately has moved from the restricted data to the algorithms' failure to involve every one of the data in the time permitted. Due of this, scientists are presently worried about the adaptability of machine learning algorithms notwithstanding their exactness. The key to success for many computer vision and machine learning challenges is having big training sets. A few published systematic reviews were taken into account in this topic. Recent systematic reviews may include both more recent and older research on the subject under study. Thus, the publications we examined were all recent. The review utilized information that were gathered somewhere in the range of 2010 and 2021. System: In this paper, we make a modified brain organization to eliminate possible components from extremely high layered datasets. Both a totaled level and an exceptionally fine-grained level of translation are feasible for these highlights. It is basically as easy to grasp non-straight connections as it is a direct relapse. We utilize the method on a dataset for item returns in web based shopping that has 15,555 aspects and 5,659,676 all out exchanges. Result and conclusion: We compare 87 various models to show that our approach not only produces higher predicted accuracy than existing techniques, but is also interpretable. The outcomes show that feature selection is a useful strategy for enhancing scalability. The method is sufficiently abstract to be used with many different analytics datasets","PeriodicalId":325612,"journal":{"name":"Mesopotamian Journal of Big Data","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Scalable Neural Network Algorithms for High Dimensional Data\",\"authors\":\"Mukesh Soni, Marwan Ali Shnan, Y. Bengio\",\"doi\":\"10.58496/mjbd/2023/001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The boundary for machine learning engineers lately has moved from the restricted data to the algorithms' failure to involve every one of the data in the time permitted. Due of this, scientists are presently worried about the adaptability of machine learning algorithms notwithstanding their exactness. The key to success for many computer vision and machine learning challenges is having big training sets. A few published systematic reviews were taken into account in this topic. Recent systematic reviews may include both more recent and older research on the subject under study. Thus, the publications we examined were all recent. The review utilized information that were gathered somewhere in the range of 2010 and 2021. System: In this paper, we make a modified brain organization to eliminate possible components from extremely high layered datasets. Both a totaled level and an exceptionally fine-grained level of translation are feasible for these highlights. It is basically as easy to grasp non-straight connections as it is a direct relapse. We utilize the method on a dataset for item returns in web based shopping that has 15,555 aspects and 5,659,676 all out exchanges. Result and conclusion: We compare 87 various models to show that our approach not only produces higher predicted accuracy than existing techniques, but is also interpretable. 
The outcomes show that feature selection is a useful strategy for enhancing scalability. The method is sufficiently abstract to be used with many different analytics datasets\",\"PeriodicalId\":325612,\"journal\":{\"name\":\"Mesopotamian Journal of Big Data\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mesopotamian Journal of Big Data\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.58496/mjbd/2023/001\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mesopotamian Journal of Big Data","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.58496/mjbd/2023/001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

The constraint for machine learning engineers has lately shifted from limited data to algorithms that cannot process all of the available data in the time allowed. Because of this, researchers are now concerned with the scalability of machine learning algorithms as well as their accuracy. Large training sets are key to success in many computer vision and machine learning challenges. Several published systematic reviews were considered for this topic; because recent systematic reviews may cover both newer and older research on the subject under study, the publications we examined were all recent, drawing on work gathered between 2010 and 2021. Method: In this paper, we build a modified neural network to extract potential features from extremely high-dimensional datasets. These features can be interpreted both at an aggregated level and at a very fine-grained level, making non-linear relationships as easy to understand as a linear regression. We apply the method to an online-shopping product-returns dataset with 15,555 dimensions and 5,659,676 total transactions. Result and conclusion: We compare 87 different models to show that our approach not only achieves higher predictive accuracy than existing techniques but is also interpretable. The results show that feature selection is a useful strategy for enhancing scalability, and the method is general enough to be applied to many different analytics datasets.
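
The paper itself does not include an implementation, but as a rough illustration of the kind of approach the abstract describes (a neural network whose per-feature weights can be read like regression coefficients and pruned for feature selection), the following PyTorch sketch is one plausible construction. The class name GatedFeatureNet, the multiplicative gate layer, the L1 penalty, and the toy data are all assumptions made for illustration, not the authors' method.

# Hypothetical sketch: a feed-forward network with a per-feature "gate" layer.
# The gate weights act like linear-regression coefficients for interpretation,
# and an L1 penalty drives most of them toward zero, performing feature selection.
import torch
import torch.nn as nn

class GatedFeatureNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # One learnable multiplicative gate per input dimension.
        self.gates = nn.Parameter(torch.ones(n_features))
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x * self.gates)

    def l1_penalty(self) -> torch.Tensor:
        # Sparsity pressure on the gates: unselected features shrink toward 0.
        return self.gates.abs().sum()

if __name__ == "__main__":
    # Random data stands in for the (unavailable) product-returns dataset.
    n_samples, n_features = 256, 1000
    X = torch.randn(n_samples, n_features)
    y = torch.randn(n_samples, 1)

    model = GatedFeatureNet(n_features)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y) + 1e-3 * model.l1_penalty()
        loss.backward()
        opt.step()

    # Rank input dimensions by gate magnitude, read much like regression coefficients.
    top = torch.topk(model.gates.abs(), k=10).indices.tolist()
    print("Top selected feature indices:", top)

Reading the learned gate magnitudes gives an aggregated, regression-like ranking of input dimensions, which is the sort of interpretability and feature selection the abstract credits with improving scalability.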