PROVISIONING LARGE-SCALED DATA WITH PARAMETERIZED QUERY PLANS: A CASE STUDY

Z. Pólkowski, S. Mishra
{"title":"PROVISIONING LARGE-SCALED DATA WITH PARAMETERIZED QUERY PLANS: A CASE STUDY","authors":"Z. Pólkowski, S. Mishra","doi":"10.32010/26166127.2021.4.1.3.14","DOIUrl":null,"url":null,"abstract":"In a general scenario, the approaches linked to the innovation of large-scaled data seem ordinary; the informational measures of such aspects can differ based on the applications as these are associated with different attributes that may support high data volumes high data quality. Accordingly, the challenges can be identified with an assurance of high-level protection and data transformation with enhanced operation quality. Based on large-scale data applications in different virtual servers, it is clear that the information can be measured by enlisting the sources linked to sensors networked and provisioned by the analysts. Therefore, it is very much essential to track the relevance and issues with enormous information. While aiming towards knowledge extraction, applying large-scaled data may involve the analytical aspects to predict future events. Accordingly, the soft computing approach can be implemented in such cases to carry out the analysis. During the analysis of large-scale data, it is essential to abide by the rules associated with security measures because preserving sensitive information is the biggest challenge while dealing with large-scale data. As high risk is observed in such data analysis, security measures can be enhanced by having provisioned with authentication and authorization. Indeed, the major obstacles linked to the techniques while analyzing the data are prohibited during security and scalability. The integral methods towards application on data possess a better impact on scalability. It is observed that the faster scaling factor of data on the processor embeds some processing elements to the system. Therefore, it is required to address the challenges linked to processors correlating with process visualization and scalability.","PeriodicalId":275688,"journal":{"name":"Azerbaijan Journal of High Performance Computing","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Azerbaijan Journal of High Performance Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32010/26166127.2021.4.1.3.14","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In a general scenario, approaches to innovation with large-scale data appear ordinary; the informational measures of such systems differ across applications, since each application is associated with different attributes that may support high data volumes and high data quality. The challenges can therefore be framed as assuring high-level protection and data transformation with improved operational quality. In large-scale data applications running on different virtual servers, information can be measured by enlisting sources linked to networked sensors that are provisioned by analysts, so it is essential to track the relevance of, and the issues within, such enormous volumes of information. When the aim is knowledge extraction, applying large-scale data may involve analytical techniques that predict future events, and soft computing approaches can be applied in such cases to carry out the analysis. During the analysis of large-scale data, it is essential to abide by the rules associated with security measures, because preserving sensitive information is the biggest challenge when dealing with data at this scale. As such analysis carries high risk, security can be strengthened by provisioning authentication and authorization. Indeed, the major obstacles to data-analysis techniques arise from security and scalability. Methods that are integral to the application of data have a stronger impact on scalability: as the data handled by a processor scales faster, additional processing elements are embedded into the system. It is therefore necessary to address the processor-related challenges that correlate with process visualization and scalability.
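As a concrete illustration of the parameterized query plans named in the title, the following minimal Python sketch shows the general technique: the SQL text, and hence the prepared plan, stays fixed while only bound parameter values change, which supports both bulk provisioning and the injection-resistance side of the security concerns above. This sketch is not the paper's implementation; the table name, column names, and sample values are illustrative assumptions.

```python
# Minimal sketch of parameterized queries (hypothetical schema, not from the paper).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE readings (sensor_id INTEGER, value REAL)")

# Bulk provisioning: one parameterized statement reused for many rows,
# so the engine can prepare the plan once and rebind values per row.
rows = [(1, 0.42), (1, 0.57), (2, 0.13)]
cur.executemany("INSERT INTO readings (sensor_id, value) VALUES (?, ?)", rows)

# Parameter binding keeps untrusted input out of the SQL text itself,
# which is one way such systems guard sensitive data during analysis.
untrusted_sensor_id = "1"  # e.g. a value received from a client
cur.execute(
    "SELECT AVG(value) FROM readings WHERE sensor_id = ?",
    (int(untrusted_sensor_id),),
)
print(cur.fetchone())

conn.close()
```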