IN-MEMORY INTELLIGENT COMPUTING

V. Hahanov, V. H. Abdullayev, S. V. Chumachenko, E. I. Lytvynova, I. V. Hahanova
{"title":"IN-MEMORY INTELLIGENT COMPUTING","authors":"V. Hahanov, V. H. Abdullayev, S. V. Chumachenko, E. I. Lytvynova, I. V. Hahanova","doi":"10.15588/1607-3274-2024-1-15","DOIUrl":null,"url":null,"abstract":"Context. Processed big data has social significance for the development of society and industry. Intelligent processing of big data is a condition for creating a collective mind of a social group, company, state and the planet as a whole. At the same time, the economy of big data (Data Economy) takes first place in the evaluation of processing mechanisms, since two parameters are very important: speed of data processing and energy consumption. Therefore, mechanisms focused on parallel processing of large data within the data storage center will always be in demand on the IT market. \nObjective. The goal of the investigation is to increase the economy of big data (Data Economy) thanks to the analysis of data as truth table addresses for the identification of patterns of production functionalities based on the similarity-difference metric. \nMethod. Intelligent computing architectures are proposed for managing cyber-social processes based on monitoring and analysis of big data. It is proposed to process big data as truth table addresses to solve the problems of identification, clustering, and classification of patterns of social and production processes. A family of automata is offered for the analysis of big data, such as addresses. The truth table is considered as a reasonable form of explicit data structures that have a useful constant – a standard address routing order. The goal of processing big data is to make it structured using a truth table for further identification before making actuator decisions. The truth table is considered as a mechanism for parallel structuring and packing of large data in its column to determine their similarity-difference and to equate data at the same addresses. Representation of data as addresses is associated with unitary encoding of patterns by binary vectors on the found universe of primitive data. The mechanism is focused on processorless data processing based on read-write transactions using in-memory computing technology with significant time and energy savings. The metric of truth table big data processing is parallelism, technological simplicity, and linear computational complexity. The price for such advantages is the exponential memory costs of storing explicit structured data. \nResults. Parallel algorithms of in-memory computing are proposed for economic mechanisms of transformation of large unstructured data, such as addresses, into useful structured data. An in-memory computing architecture with global feedback and an algorithm for matrix parallel processing of large data such as addresses are proposed. It includes a framework for matrix analysis of big data to determine the similarity between vectors that are input to the matrix sequencer. Vector data analysis is transformed into matrix computing for big data processing. The speed of the parallel algorithm for the analysis of big data on the MDV matrix of deductive vectors is linearly dependent on the number of bits of the input vectors or the power of the universe of primitives. A method of identifying patterns using key words has been developed. It is characterized by the use of unitary coded data components for the synthesis of the truth table of the business process. This allows you to use read-write transactions for parallel processing of large data such as addresses. 
\nConclusions. The scientific novelty consists in the development of the following innovative solutions: 1) a new vector-matrix technology for parallel processing of large data, such as addresses, is proposed, characterized by the use of read-write transactions on matrix memory without the use of processor logic; 2) an in-memory computing architecture with global feedback and an algorithm for matrix parallel processing of large data such as addresses are proposed; 3) a method of identifying patterns using keywords is proposed, which is characterized by the use of unitary coded data components for the synthesis of the truth table of the business process, which makes it possible to use the read-write transaction for parallel processing of large data such as addresses. The practical significance of the study is that any task of artificial intelligence (similarity-difference, classification-clustering and recognition, pattern identification) can be solved technologically simply and efficiently with the help of a truth table (or its derivatives) and unitarily coded big data . Research prospects are related to the implementation of this digital modeling technology devices on the EDA market. KEYWORDS: Intelligent","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"250 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radio Electronics, Computer Science, Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15588/1607-3274-2024-1-15","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Context. Processed big data has social significance for the development of society and industry. Intelligent processing of big data is a precondition for creating a collective mind of a social group, a company, a state, and the planet as a whole. At the same time, the economy of big data (Data Economy) comes first in the evaluation of processing mechanisms, since two parameters are critical: processing speed and energy consumption. Therefore, mechanisms focused on parallel processing of big data inside the data storage center will always be in demand on the IT market.

Objective. The goal of the investigation is to improve the economy of big data (Data Economy) through the analysis of data as truth-table addresses for identifying patterns of production functionalities based on a similarity-difference metric.

Method. Intelligent computing architectures are proposed for managing cyber-social processes based on monitoring and analysis of big data. It is proposed to process big data as truth-table addresses in order to solve problems of identification, clustering, and classification of patterns of social and production processes. A family of automata is offered for analyzing big data treated as addresses. The truth table is regarded as a reasonable form of explicit data structure that has a useful constant: a standard address-routing order. The goal of processing big data is to structure it with a truth table for subsequent identification before actuator decisions are made. The truth table serves as a mechanism for parallel structuring and packing of big data into its columns, determining similarity-difference and equating data that map to the same addresses. Representing data as addresses is associated with unitary encoding of patterns by binary vectors over the discovered universe of primitive data. The mechanism is oriented toward processorless data processing based on read-write transactions using in-memory computing technology, with significant savings in time and energy. The metrics of truth-table big data processing are parallelism, technological simplicity, and linear computational complexity. The price of these advantages is the exponential memory cost of storing explicit structured data.

Results. Parallel in-memory computing algorithms are proposed as economical mechanisms for transforming large unstructured data, treated as addresses, into useful structured data. An in-memory computing architecture with global feedback and an algorithm for matrix-parallel processing of big data as addresses are proposed. They include a framework for matrix analysis of big data that determines the similarity between the vectors fed into the matrix sequencer. Vector data analysis is thereby transformed into matrix computing for big data processing. The speed of the parallel algorithm that analyzes big data on the MDV matrix of deductive vectors depends linearly on the bit width of the input vectors, i.e., on the cardinality (power) of the universe of primitives. A method of identifying patterns using keywords has been developed; it is characterized by the use of unitary-coded data components for synthesizing the truth table of a business process, which allows read-write transactions to be used for parallel processing of big data as addresses (a sketch of this idea follows below).
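The following is a minimal Python sketch, not the authors' implementation, of the core mechanism described in the Method and Results paragraphs: patterns are unitarily (one-hot) encoded as binary vectors over a universe of primitives, each vector is treated as an address into a truth-table-like memory, storing a pattern is a write transaction, and identification is a read transaction. The universe of primitives, the pattern labels, and the function names are illustrative assumptions.

```python
from typing import Iterable, Optional

# Hypothetical universe of primitives; in the method it is derived from the data.
UNIVERSE = ["fever", "cough", "rash", "fatigue"]
INDEX = {p: i for i, p in enumerate(UNIVERSE)}

def unitary_encode(primitives: Iterable[str]) -> int:
    """Unitary (one-hot) encoding of a primitive set as a binary vector,
    returned as an integer so it can serve directly as a memory address."""
    address = 0
    for p in primitives:
        address |= 1 << INDEX[p]
    return address

# Truth-table memory: 2^n cells, one per possible input vector.
# This is the exponential memory cost named in the abstract.
memory: list[Optional[str]] = [None] * (1 << len(UNIVERSE))

def write_pattern(primitives: Iterable[str], label: str) -> None:
    """Write transaction: store a pattern label at its vector address."""
    memory[unitary_encode(primitives)] = label

def read_pattern(primitives: Iterable[str]) -> Optional[str]:
    """Read transaction: identify a pattern with no processing
    beyond address decoding."""
    return memory[unitary_encode(primitives)]

write_pattern(["fever", "cough"], "flu-like")
print(read_pattern(["cough", "fever"]))  # -> flu-like
```

The exponential size of the memory array makes the trade-off named in the abstract explicit: processorless, address-only identification in exchange for 2^n storage over the universe of primitives.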
Conclusions. The scientific novelty consists in the following innovative solutions: 1) a new vector-matrix technology for parallel processing of big data as addresses, characterized by the use of read-write transactions on matrix memory without processor logic; 2) an in-memory computing architecture with global feedback and an algorithm for matrix-parallel processing of big data as addresses; 3) a method of identifying patterns by keywords, characterized by the use of unitary-coded data components for synthesizing the truth table of a business process, which makes it possible to use read-write transactions for parallel processing of big data as addresses (a sketch of such keyword-based identification closes this abstract). The practical significance of the study is that any artificial-intelligence task (similarity-difference, classification-clustering, recognition, pattern identification) can be solved simply and efficiently with the help of a truth table (or its derivatives) and unitarily coded big data. Research prospects are related to bringing devices based on this digital modeling technology to the EDA market. KEYWORDS: Intelligent
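As a companion to the sketch above, here is a minimal, self-contained illustration of keyword-based pattern identification using a similarity-difference comparison between unitary-coded vectors. The keyword universe, the stored business-process patterns, and the specific metric (count of shared bits versus count of differing bits) are assumptions chosen for illustration, not the authors' exact formulation.

```python
# Hypothetical keyword primitives of a business process.
UNIVERSE = ["price", "delivery", "invoice", "refund"]
INDEX = {w: i for i, w in enumerate(UNIVERSE)}

def encode(keywords):
    """Unitary (one-hot) encoding of a keyword set as an integer bit vector."""
    v = 0
    for w in keywords:
        v |= 1 << INDEX[w]
    return v

def similarity_difference(a, b):
    """Return (similarity, difference): counts of shared and
    non-shared primitives between two unitary-coded vectors."""
    return bin(a & b).count("1"), bin(a ^ b).count("1")

# Business-process patterns stored as unitary-coded vectors (illustrative).
patterns = {
    "billing": encode(["price", "invoice"]),
    "returns": encode(["refund", "delivery"]),
}

# Identification: pick the pattern maximizing similarity minus difference.
query = encode(["invoice", "price", "refund"])
best = max(
    patterns,
    key=lambda name: similarity_difference(query, patterns[name])[0]
    - similarity_difference(query, patterns[name])[1],
)
print(best)  # -> billing
```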