Radio Electronics, Computer Science, Control: Latest Publications

METHOD OF CREATING A MINIMAL SPANNING TREE ON AN ARBITRARY SUBSET OF VERTICES OF A WEIGHTED UNDIRECTED GRAPH
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-17
V. M. Batsamut, S. O. Hodlevsky, Yu. P. Babkov, D. Morkvin
{"title":"METHOD OF CREATING A MINIMAL SPANNING TREE ON AN ARBITRARY SUBSET OF VERTICES OF A WEIGHTED UNDIRECTED GRAPH","authors":"V. M. Batsamut, S. O. Hodlevsky, Yu. P. Babkov, D. Morkvin","doi":"10.15588/1607-3274-2024-1-17","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-17","url":null,"abstract":"Context. The relevance of the article is determined by the need for further development of models for optimal restoration of the connectivity of network objects that have undergone fragmentation due to emergency situations of various origins. The method proposed in this article solves the problematic situation of minimizing the amount of restoration work (total financial costs) when promptly restoring the connectivity of a selected subset of elements of a network object after its fragmentation. \u0000The purpose of the study is to develop a method for creating a minimal spanning tree on an arbitrary subset of vertices of a weighted undirected graph to minimize the amount of restoration work and/or total financial costs when promptly restoring the connectivity of elements that have a higher level of importance in the structure of a fragmented network object. \u0000Method. The developed method is based on the idea of searching for local minima in the structure of a model undirected graph using graph vertices that are not included in the list of base vertices to be united by a minimal spanning tree. When searching for local minima, the concept of an equilateral triangle and a radial structure in such a triangle is used. In this case, there are four types of substructures that provide local minima: first, those with one common base vertex; second, those with two common base vertices; third, those with three common base vertices; fourth, those without common base vertices, located in different parts of the model graph. Those vertices that are not included in the list of basic ones, but through which local minima are ensured, are added to the basic ones. Other vertices (non-basic) along with their incident edges are removed from the structure of the model graph. Then, using one of the well-known methods of forming spanning trees, a minimal spanning tree is formed on the structure obtained in this way, which combines the set of base vertices. \u0000Results. 1) A method for creating a minimal spanning tree on an arbitrary subset of vertices of a weighted undirected graph has been developed. 2) A set of criteria for determining local minima in the structure of the model graph is proposed. 3) The method has been verified on test problems. \u0000Conclusions. The theoretical studies and several experiments confirm the efficiency of the developed method. The solutions developed using the developed method are accurate, which makes it possible to recommend it for practical use in determining strategies for restoring the connectivity of fragmented network objects.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"68 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140755466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
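The triangle-based local-minimum search described in the abstract above is specific to the paper and is not reproduced here. As a point of reference only, the sketch below implements the standard metric-closure heuristic for the same underlying task: joining a chosen subset of "base" vertices of a weighted undirected graph by a low-cost tree. The graph representation, function names and toy example are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): connect a subset of "base"
# vertices of a weighted undirected graph via the classic metric-closure
# heuristic -- Dijkstra between base vertices, an MST on those distances,
# then expansion of each MST edge back into a shortest path in the graph.
import heapq
from itertools import combinations

def dijkstra(graph, src):
    """graph: {u: {v: weight}}; returns (dist, parent) maps from src."""
    dist, parent = {src: 0.0}, {src: None}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, parent

def subset_spanning_tree(graph, base):
    """Return a set of graph edges connecting all vertices in `base`."""
    paths = {b: dijkstra(graph, b) for b in base}
    # Kruskal-style MST on the metric closure of the base vertices.
    closure = sorted((paths[a][0][b], a, b) for a, b in combinations(base, 2))
    comp = {b: b for b in base}
    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x
    tree_edges = set()
    for _, a, b in closure:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        comp[ra] = rb
        # Expand the closure edge (a, b) back into real graph edges.
        _, parent = paths[a]
        v = b
        while parent[v] is not None:
            tree_edges.add(tuple(sorted((v, parent[v]))))
            v = parent[v]
    return tree_edges

# Example: base vertices {"A", "C", "E"} on a small weighted graph.
g = {"A": {"B": 1, "D": 4}, "B": {"A": 1, "C": 1, "E": 5},
     "C": {"B": 1}, "D": {"A": 4, "E": 1}, "E": {"D": 1, "B": 5}}
print(subset_spanning_tree(g, ["A", "C", "E"]))
```

Non-base vertices that happen to lie on the chosen shortest paths end up in the tree, which mirrors the paper's idea of promoting useful intermediate vertices into the base set.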
EVALUATION OF THE INFLUENCE OF ENVIRONMENTAL FACTORS AND COGNITIVE PARAMETERS ON THE DECISION-MAKING PROCESS IN HUMAN-MACHINE SYSTEMS OF CRITICAL APPLICATION
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-7
V. I. Perederyi, E. Borchik, V. Zosimov, O. Bulgakova
{"title":"EVALUATION OF THE INFLUENCE OF ENVIRONMENTAL FACTORS AND COGNITIVE PARAMETERS ON THE DECISION-MAKING PROCESS IN HUMAN-MACHINE SYSTEMS OF CRITICAL APPLICATION","authors":"V. I. Perederyi, E. Borchik, V. Zosimov, O. Bulgakova","doi":"10.15588/1607-3274-2024-1-7","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-7","url":null,"abstract":"Context. A feature of human-machine systems of critical application operating in real time is that they include as elements both technical systems and people interacting with these systems. At the same time, the main difficulties are associated not only with the improvement of hardware and software, but also with the insufficient development of methods for reliably predicting the impact of the production environment on the human factor and, as a result, on the relevance of decisions made by decision makers. As a result, the task of developing methods for determining the mutual influence of environmental factors and cognitive parameters of decision makers on the decision-making process becomes very relevant. \u0000Objective. The aim of the work is to propose methodological foundations for the development and study of fuzzy hierarchical relational cognitive models to determine the influence of environmental factors and cognitive parameters of decision makers on the DMP. \u0000Method. When building FHRCM methods of “soft computing”, methodologies of cognitive and fuzzy cognitive modeling were used, providing an acceptable formalization uncertainty of mutual influence of factors on the DMP. \u0000Results. A fuzzy cognitive model based on a fuzzy Bayesian belief network has been developed, which makes it possible to draw a connection between qualitative and quantitative assessments of mutually influencing factors on the DMP. The proposed model makes it possible to probabilistically predict the influence of factors and choose rational ways of their interaction in the DMP. \u0000Conclusions. The results of the experiments make it possible to recommend using the developed model, which takes into account the mutual influence of factors of various nature, including cognitive ones, in the DMP in order to improve the efficiency of HMSCA management as a whole.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"25 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140753629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
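The paper's FHRCM and fuzzy Bayesian belief network are not reproduced here; the sketch below only illustrates the general fuzzy-cognitive-modeling idea the abstract builds on, using a generic fuzzy cognitive map update. The concept names, influence weights and sigmoid squashing are hypothetical choices, not values from the paper.

```python
# Generic fuzzy cognitive map (FCM) update -- a minimal illustration of the
# kind of "mutual influence of factors" modelling the abstract refers to;
# the concepts and weights below are hypothetical, not from the paper.
import numpy as np

def fcm_step(activations, weights):
    """One synchronous FCM update: a_i <- sigmoid(a_i + sum_j w_ji * a_j)."""
    raw = activations + weights.T @ activations
    return 1.0 / (1.0 + np.exp(-raw))          # squash into (0, 1)

concepts = ["noise level", "operator fatigue", "decision relevance"]
# W[i, j] = influence of concept i on concept j (hypothetical values).
W = np.array([[0.0, 0.6, -0.4],
              [0.0, 0.0, -0.7],
              [0.0, 0.0,  0.0]])
a = np.array([0.8, 0.3, 0.9])                  # initial fuzzy activations
for _ in range(10):                            # iterate until quasi-stable
    a = fcm_step(a, W)
print(dict(zip(concepts, a.round(3))))
```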
METHOD FOR DETERMINING THE BIT GRID OVERFLOW OF A COMPUTER SYSTEM OPERATING IN THE SYSTEM OF RESIDUAL CLASSES
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-21
A. S. Yanko, V. Krasnobayev, S. B. Nikolsky, O. O. Kruk
{"title":"METHOD FOR DETERMINING THE BIT GRID OVERFLOW OF A COMPUTER SYSTEM OPERATING IN THE SYSTEM OF RESIDUAL CLASSES","authors":"A. S. Yanko, V. Krasnobayev, S. B. Nikolsky, O. O. Kruk","doi":"10.15588/1607-3274-2024-1-21","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-21","url":null,"abstract":"Context. Consideration of a set of examples of practical application of the procedure for identifying overflow of the bit grid of a computer system operating in a non-positional number system in residual classes. The object of the study is the process of processing data represented in the residual class system. \u0000Objective. The goal of the work is to consider and analyze examples of the bit grid overflow definition of a computer system when implementing the operation of adding two numbers in a system of residual classes based on the application of a method for determining the bit grid overflow, based on the use of the concept of number rank. \u0000Method. The specificity of the functioning of a computer system in a system of residual classes requires the implementation of not only modular operations, but also requires the implementation of additional, so-called non-modular operations. Non-modular operations include the operation of determining the overflow of the bit grid of a computer system in the system of residual classes. In a non-positional number system in residual classes, implementing the process of detecting overflow of the bit grid of a computer system is a difficult task to implement. The method considered in the work for determining the overflow of the bit grid is based on the use of positional features of a non-positional code of numbers in the system of residual classes, namely the true and calculated ranks of a number. The process of determining the overflow of the result of the operation of adding two numbers in the system of residual classes has been studied, since this arithmetic operation is the main, basic operation performed by a computer system. \u0000Results. The developed methods are justified theoretically and studied when performing arithmetic modular operations of addition, subtraction and multiplication using tabular procedures. \u0000Conclusions. The main advantage of the presented method is that the process of determining the overflow of the bit grid can be carried out in the dynamics of the computing process of the computer system, i.e. without stopping the solution of the problem. This circumstance makes it possible to reduce the unproductive expenditure of the computer system in the system of residual classes. In addition, this method can be used to control the operation of adding two numbers in the residual class system. This increases the reliability of obtaining the true result of the operation of adding two numbers in the system of residual classes.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"243 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140751891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
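A minimal sketch of what "bit grid overflow" means in a residue number system: addition is performed independently modulo each base, and the true sum exceeding the dynamic range M can only be noticed by a non-modular check. The moduli and the naive CRT-based check below are illustrative; the paper's rank-based criterion is not reproduced.

```python
# Minimal residue number system (RNS) sketch: addition is done independently
# modulo each base, and "bit grid overflow" means the true sum exceeded the
# dynamic range M = m1*m2*...*mk, so the residues wrap around.  Overflow is
# detected here naively via CRT reconstruction, not via the paper's ranks.
from math import prod

MODULI = (3, 5, 7)                 # pairwise coprime base, M = 105
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese remainder reconstruction of the value in [0, M)."""
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
    return x % M

a, b = 60, 70                      # 60 + 70 = 130 > M = 105 -> overflow
s = rns_add(to_rns(a), to_rns(b))
overflow = from_rns(s) != a + b    # wrapped result no longer equals true sum
print(s, from_rns(s), overflow)    # residues, wrapped value 25, True
```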
THE DESIGN OF THE PIPELINED RISC-V PROCESSOR WITH THE HARDWARE COPROCESSOR OF DIGITAL SIGNAL PROCESSING
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-18
Y. Y. Vavruk, V. V. Makhrov, H. O. Hedeon
{"title":"THE DESIGN OF THE PIPELINED RISC-V PROCESSOR WITH THE HARDWARE COPROCESSOR OF DIGITAL SIGNAL PROCESSING","authors":"Y. Y. Vavruk, V. V. Makhrov, H. O. Hedeon","doi":"10.15588/1607-3274-2024-1-18","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-18","url":null,"abstract":"Context. The digital signal processing is applied in many fields of science, technology and human activity. One of the ways of implementing algorithms of digital signal processing is the development of coprocessors as an integral part of well-known architectures. \u0000In the case of developing a pipelined device, the presented approach will allow to use software and hardware tools of the appropriate architecture, provide the faster execution of signal processing algorithms, reduce the number of cycles and memory accesses. \u0000Objective. Objectives are design and characterization study of a pipelined RISC-V processor and coprocessor of digital signal processing which performs fast Fourier transform. \u0000Method. Analyzing technical literature and existing decisions allow to assess advantages and disadvantages of modern developments and on the basis of which to form the relevance of the selected topic. Model designing and simulation results allow to examine a model efficiency, to determine weak components’ parts and to improve model parameters. \u0000Results. The pipelined RISC-V processor has been designed which executes a basic set of instructions. Execution time of assembly program on the single-cycled and the pipelined processors have been analyzed. According to the results, the test program on the pipelined processor is executed in 29 cycles, while on the single-cycle processor it takes 60 cycles. The structure of the coprocessor for the fast Fourier transform algorithm and a set of processor instructions that allow working with the coprocessor have been developed. The number of cycles of the coprocessor based on Radix-2 fast Fourier transform algorithm for 512 points is 2358 cycles, and for 1024 points is 5180 cycles. \u0000Conclusions. Conducted researches and calculations have showed that the application of the developed hardware coprocessor reduces the fast Fourier transform algorithm execution time and the load of the pipelined processor during calculations.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"45 28","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140751958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
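As a behavioural reference for the radix-2 transform the coprocessor accelerates, the sketch below is a plain software Cooley-Tukey FFT (bit-reversal followed by log2(n) butterfly stages). It says nothing about the paper's pipeline or RTL; the function name and test vector are illustrative.

```python
# Software reference for the radix-2 decimation-in-time FFT that the
# coprocessor accelerates in hardware -- a behavioural model only.
import cmath

def fft_radix2(x):
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    # Bit-reversal permutation of the input.
    a = list(x)
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # log2(n) stages of butterflies.
    size = 2
    while size <= n:
        w_step = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            w = 1.0
            for k in range(size // 2):
                u = a[start + k]
                t = w * a[start + k + size // 2]
                a[start + k] = u + t
                a[start + k + size // 2] = u - t
                w *= w_step
        size *= 2
    return a

# Unit impulse -> flat spectrum of magnitude 1.
print([round(abs(v), 3) for v in fft_radix2([1, 0, 0, 0, 0, 0, 0, 0])])
```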
METHOD OF GENERATIVE-ADVERSARIAL NETWORKS SEARCHING ARCHITECTURES FOR BIOMEDICAL IMAGES SYNTHESIS
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-10
O. Berezsky, P. B. Liashchynskyi
{"title":"METHOD OF GENERATIVE-ADVERSARIAL NETWORKS SEARCHING ARCHITECTURES FOR BIOMEDICAL IMAGES SYNTHESIS","authors":"O. Berezsky, P. B. Liashchynskyi","doi":"10.15588/1607-3274-2024-1-10","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-10","url":null,"abstract":"Context. The article examines the problem of automatic design of architectures of generative-adversarial networks. Generativeadversarial networks are used for image synthesis. This is especially true for the synthesis of biomedical images – cytological and histological, which are used to make a diagnosis in oncology. The synthesized images are used to train convolutional neural networks. Convolutional neural networks are currently among the most accurate classifiers of biomedical images. \u0000Objective. The aim of the work is to develop an automatic method for searching for architectures of generative-adversarial networks based on a genetic algorithm. \u0000Method. The developed method consists of the stage of searching for the architecture of the generator with a fixed discriminator and the stage of searching for the architecture of the discriminator with the best generator. \u0000At the first stage, a fixed discriminator architecture is defined and a generator is searched for. Accordingly, after the first step, the architecture of the best generator is obtained, i.e. the model with the lowest FID value. \u0000At the second stage, the best generator architecture was used and a search for the discriminator architecture was carried out. At each cycle of the optimization algorithm, a population of discriminators is created. After the second step, the architecture of the generative-adversarial network is obtained. \u0000Results. Cytological images of breast cancer on the Zenodo platform were used to conduct the experiments. As a result of the study, an automatic method for searching for architectures of generatively adversarial networks has been developed. On the basis of computer experiments, the architecture of a generative adversarial network for the synthesis of cytological images was obtained. The total time of the experiment was ~39.5 GPU hours. As a result, 16,000 images were synthesized (4000 for each class). To assess the quality of synthesized images, the FID metric was used.The results of the experiments showed that the developed architecture is the best. The network’s FID value is 3.39. This result is the best compared to well-known generative adversarial networks. \u0000Conclusions. The article develops a method for searching for architectures of generative-adversarial networks for the problems of synthesis of biomedical images. In addition, a software module for the synthesis of biomedical images has been developed, which can be used to train CNN.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"118 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140752686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
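A heavily simplified skeleton of the two-stage evolutionary search the abstract outlines: evolve generator "genes" against a fixed discriminator using FID as fitness, then evolve the discriminator against the best generator. The gene encoding, mutation scheme and the stubbed train_and_fid evaluator are placeholders, not the paper's implementation.

```python
# Skeleton of a two-stage genetic architecture search: generator first
# (fixed discriminator), then discriminator (best generator).  The encoding
# and the train_and_fid() evaluator are placeholders, not the paper's code.
import random

LAYER_CHOICES = [32, 64, 128, 256]                 # hypothetical channel widths

def random_gene(n_layers=4):
    return [random.choice(LAYER_CHOICES) for _ in range(n_layers)]

def mutate(gene, rate=0.3):
    return [random.choice(LAYER_CHOICES) if random.random() < rate else g
            for g in gene]

def train_and_fid(gene, opponent_gene):
    """Placeholder: would build both networks, briefly train the GAN and
    return the FID of the synthesized images (lower is better)."""
    return random.uniform(3.0, 50.0)               # stub value

def evolve(opponent_gene, generations=5, pop_size=8):
    population = [random_gene() for _ in range(pop_size)]
    best = min(population, key=lambda g: train_and_fid(g, opponent_gene))
    for _ in range(generations):
        population = [mutate(best) for _ in range(pop_size - 1)] + [best]
        best = min(population, key=lambda g: train_and_fid(g, opponent_gene))
    return best

fixed_discriminator = random_gene()
best_generator = evolve(fixed_discriminator)       # stage 1
best_discriminator = evolve(best_generator)        # stage 2
print(best_generator, best_discriminator)
```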
APPROACH TO THE AUTOMATIC CREATION OF AN ANNOTATED DATASET FOR THE DETECTION, LOCALIZATION AND CLASSIFICATION OF BLOOD CELLS IN AN IMAGE
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-12
S. M. Kovalenko, O. S. Kutsenko, S. V. Kovalenko, A. Kovalenko
{"title":"APPROACH TO THE AUTOMATIC CREATION OF AN ANNOTATED DATASET FOR THE DETECTION, LOCALIZATION AND CLASSIFICATION OF BLOOD CELLS IN AN IMAGE","authors":"S. M. Kovalenko, O. S. Kutsenko, S. V. Kovalenko, A. Kovalenko","doi":"10.15588/1607-3274-2024-1-12","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-12","url":null,"abstract":"Context. The paper considers the problem of automating the creation of an annotated dataset for further use in a system for detecting, localizing and classifying blood cells in an image using deep learning. The subject of the research is the processes of digital image processing for object detection and localization. \u0000Objective. The aim of this study is to create a pipeline of digital image processing methods that can automatically generate an annotated set of blood smear images. This set will then be used to train and validate deep learning models, significantly reducing the time required by machine learning specialists. \u0000Method. The proposed approach for object detection and localization is based on digital image processing methods such as filtering, thresholding, binarization, contour detection, and filling. The pipeline for detection and localization includes the following steps: The given fragment of text describes a process that involves noise reduction, conversion to the HSV color model, defining a mask for white blood cells and platelets, detecting the contours of white blood cells and platelets, determining the coordinates of the upper left and lower right corners of white blood cells and platelets, calculating the area of the region inside the bounding box, saving the obtained data, and determining the most common color in the image; filling the contours of leukocytes and platelets with said color; defining a mask for red blood cells; defining the contours of red blood cells; determining the coordinates of the upper left and lower right corners of red blood cells; calculating the area of the region within the bounding box; entering data about the found objects into the dataframe; saving to a .csv file for future use. With an unlabeled image dataset and a generated .csv file using image processing libraries, any researcher should be able to recreate a labeled dataset. \u0000Results. The developed approach was implemented in software for creating an annotated dataset of blood smear images \u0000Conclusions. The study proposes and justifies an approach to automatically create a set of annotated data. The pipeline is tested on a set of unlabelled data and a set of labelled data is obtained, consisting of cell images and a .csv file with the attributes “file name”, “type”, “xmin”, “ymin”, “xmax”, “ymax”, “area”, which are the coordinates of the bounding box for each object. 
The number of correctly, incorrectly, and unrecognised objects is calculated manually, and metrics are calculated to assess the accuracy and quality of object detection and localisation.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"12 17","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140754450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
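A condensed sketch of the kind of pipeline the abstract describes, written with OpenCV: blur, HSV conversion, a colour mask, contour detection, bounding boxes and a CSV row per object. Only the leukocyte branch is shown, and the HSV thresholds, minimum area and file names are illustrative guesses rather than the authors' calibrated values.

```python
# Condensed annotation-pipeline sketch: noise reduction, HSV colour mask,
# contours, bounding boxes, and one CSV row per detected object.
# Thresholds and file names are illustrative, not the authors' values.
import csv
import cv2

def annotate(image_path, csv_path, label="WBC",
             hsv_lo=(120, 50, 50), hsv_hi=(170, 255, 255), min_area=200):
    img = cv2.imread(image_path)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)            # noise reduction
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)               # colour mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file name", "type", "xmin", "ymin",
                         "xmax", "ymax", "area"])
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h < min_area:                           # drop specks
                continue
            writer.writerow([image_path, label, x, y, x + w, y + h, w * h])

annotate("smear_001.png", "annotations.csv")
```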
A NONLINEAR REGRESSION MODEL FOR EARLY LOC ESTIMATION OF OPEN-SOURCE KOTLIN-BASED APPLICATIONS
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-8
S. Prykhodko, N. Prykhodko, A. V. Koltsov
{"title":"A NONLINEAR REGRESSION MODEL FOR EARLY LOC ESTIMATION OF OPEN-SOURCE KOTLIN-BASED APPLICATIONS","authors":"S. Prykhodko, N. Prykhodko, A. V. Koltsov","doi":"10.15588/1607-3274-2024-1-8","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-8","url":null,"abstract":"Context. The early lines of code (LOC) estimation in software projects holds significant importance, as it directly influences the prediction of development effort, covering a spectrum of different programming languages, and open-source Kotlin-based applications in particular. The object of the study is the process of early LOC estimation of open-source Kotlin-based apps. The subject of the study is the nonlinear regression models for early LOC estimation of open-source Kotlin-based apps.\u0000Objective. The goal of the work is to build the nonlinear regression model with three predictors for early LOC estimation of open-source Kotlin-based apps based on the Box-Cox four-variate normalizing transformation to increase the confidence in early LOC estimation of these apps.\u0000Method. For early LOC estimation in open-source Kotlin-based apps, the model, confidence, and prediction intervals of nonlinear regression were constructed using the Box-Cox four-variate normalizing transformation and specialized techniques. These techniques, relying on multiple nonlinear regression analyses incorporating multivariate normalizing transformations, account for the dependencies between variables in non-Gaussian data scenarios. As a result, this method tends to reduce the mean magnitude of relative error (MMRE) and narrow confidence and prediction intervals compared to models utilizing univariate normalizing transformations.\u0000Results. An analysis has been carried out to compare the constructed model with nonlinear regression models employing decimal logarithm and Box-Cox univariate transformation.\u0000Conclusions. The nonlinear regression model with three predictors for early LOC estimation of open-source Kotlin-based apps is constructed using the Box-Cox four-variate transformation. Compared to the other nonlinear regression models, this model demonstrates a larger multiple coefficient of determination, a smaller value of the MMRE, and narrower confidence and prediction intervals. The prospects for further research may include the application of other data sets to construct the nonlinear regression model for early LOC estimation of open-source Kotlin-based apps for other restrictions on predictors.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"35 17","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140753169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
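For orientation only, the sketch below fits a regression after univariate Box-Cox transforms of the response and each predictor on synthetic data; the paper's four-variate transformation additionally models the dependencies between the variables, which this simplification ignores. The variable names and toy data are assumptions.

```python
# Simplified Box-Cox regression sketch for LOC estimation: univariate
# transforms, a linear fit in the transformed space, and back-transformation
# of predictions.  The paper's *four-variate* transformation is not modelled.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)
X = rng.uniform(1, 50, size=(40, 3))                  # toy predictors
y = 30 * X[:, 0] ** 0.8 + 5 * X[:, 1] + X[:, 2] + rng.normal(0, 5, 40)

# Univariate Box-Cox of the response and each predictor.
y_t, lam_y = boxcox(y)
Xt, lams = [], []
for j in range(X.shape[1]):
    xj_t, lam = boxcox(X[:, j])
    Xt.append(xj_t)
    lams.append(lam)
A = np.column_stack([np.ones(len(y))] + Xt)           # design matrix
coef, *_ = np.linalg.lstsq(A, y_t, rcond=None)

def predict(x_new):
    z = [1.0] + [boxcox(np.array([v]), lmbda=lam)[0]
                 for v, lam in zip(x_new, lams)]
    return inv_boxcox(np.dot(coef, z), lam_y)          # back to LOC scale

print(round(float(predict([20.0, 10.0, 5.0])), 1))
```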
IN-MEMORY INTELLIGENT COMPUTING
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-15
V. Hahanov, V. H. Abdullayev, S. V. Chumachenko, E. I. Lytvynova, I. V. Hahanova
{"title":"IN-MEMORY INTELLIGENT COMPUTING","authors":"V. Hahanov, V. H. Abdullayev, S. V. Chumachenko, E. I. Lytvynova, I. V. Hahanova","doi":"10.15588/1607-3274-2024-1-15","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-15","url":null,"abstract":"Context. Processed big data has social significance for the development of society and industry. Intelligent processing of big data is a condition for creating a collective mind of a social group, company, state and the planet as a whole. At the same time, the economy of big data (Data Economy) takes first place in the evaluation of processing mechanisms, since two parameters are very important: speed of data processing and energy consumption. Therefore, mechanisms focused on parallel processing of large data within the data storage center will always be in demand on the IT market. \u0000Objective. The goal of the investigation is to increase the economy of big data (Data Economy) thanks to the analysis of data as truth table addresses for the identification of patterns of production functionalities based on the similarity-difference metric. \u0000Method. Intelligent computing architectures are proposed for managing cyber-social processes based on monitoring and analysis of big data. It is proposed to process big data as truth table addresses to solve the problems of identification, clustering, and classification of patterns of social and production processes. A family of automata is offered for the analysis of big data, such as addresses. The truth table is considered as a reasonable form of explicit data structures that have a useful constant – a standard address routing order. The goal of processing big data is to make it structured using a truth table for further identification before making actuator decisions. The truth table is considered as a mechanism for parallel structuring and packing of large data in its column to determine their similarity-difference and to equate data at the same addresses. Representation of data as addresses is associated with unitary encoding of patterns by binary vectors on the found universe of primitive data. The mechanism is focused on processorless data processing based on read-write transactions using in-memory computing technology with significant time and energy savings. The metric of truth table big data processing is parallelism, technological simplicity, and linear computational complexity. The price for such advantages is the exponential memory costs of storing explicit structured data. \u0000Results. Parallel algorithms of in-memory computing are proposed for economic mechanisms of transformation of large unstructured data, such as addresses, into useful structured data. An in-memory computing architecture with global feedback and an algorithm for matrix parallel processing of large data such as addresses are proposed. It includes a framework for matrix analysis of big data to determine the similarity between vectors that are input to the matrix sequencer. Vector data analysis is transformed into matrix computing for big data processing. 
The speed of the parallel algorithm for the analysis of big data on the MDV matrix of deductive vectors is linearly dependent on the number of bits of the input vectors or the power","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"250 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140751161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
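A toy illustration of the address-based idea in the abstract: patterns are unitarily (one-hot) encoded over a universe of primitives, stored keyed by their code ("address"), and compared with a bitwise similarity-difference score. The universe, encoding and scoring below are generic illustrations, not the paper's automata or matrix sequencer.

```python
# Toy sketch: unitary (one-hot) encoding of patterns over a universe of
# primitives, address-keyed storage, and a bitwise similarity-difference
# score.  Generic illustration only, not the paper's in-memory architecture.
UNIVERSE = ["a", "b", "c", "d", "e"]          # universe of primitive features

def encode(features):
    """Unitary (one-hot) encoding of a feature set as an integer bit vector."""
    return sum(1 << i for i, u in enumerate(UNIVERSE) if u in features)

def similarity_difference(x, y):
    """Shared bits minus differing bits -- a crude similarity-difference score."""
    return bin(x & y).count("1") - bin(x ^ y).count("1")

memory = {}                                    # "truth table": address -> count
for pattern in [{"a", "b"}, {"a", "b"}, {"b", "c", "d"}]:
    addr = encode(pattern)
    memory[addr] = memory.get(addr, 0) + 1     # equal patterns share an address

query = encode({"a", "b", "c"})
best = max(memory, key=lambda addr: similarity_difference(query, addr))
print(memory, bin(best))
```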
PROACTIVE HORIZONTAL SCALING METHOD FOR KUBERNETES
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-20
O. Rolik, V. Omelchenko
{"title":"PROACTIVE HORIZONTAL SCALING METHOD FOR KUBERNETES","authors":"O. Rolik, V. Omelchenko","doi":"10.15588/1607-3274-2024-1-20","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-20","url":null,"abstract":"Context. The problem of minimizing redundant resource reservation while maintaining QoS at an agreed level is crucial for modern information systems. Modern information systems can include a large number of applications, each of which uses computing resources and has its own unique features, which require a high level of automation to increase the efficiency of computing resource management processes. \u0000Objective. The purpose of this paper is to ensure the quality of IT services at an agreed level in the face of significant dynamics of user requests by developing and using a method of proactive automatic application scaling in Kubernetes. \u0000Method. This paper proposes a proactive horizontal scaling method based on the Prophet time series prediction algorithm. Prometheus metrics storage is used as a data source for training and validating forecasting models. Based on the historical metrics, a model is trained to predict the future utilization of computation resources using Prophet. The obtained time series is validated and used to calculate the required number of application replicas, considering deployment delays. \u0000Results. The experiments have shown the effectiveness of the proposed proactive automated application scaling method in comparison with existing solutions based on the reactive approach in the selected scenarios. This method made it possible to reduce the reservation of computing resources by 47% without loss of service quality compared to the configuration without scaling. \u0000Conclusions. A method for automating the horizontal scaling of applications in Kubernetes is proposed. Although the experiments have shown the effectiveness of this solution, this method can be significantly improved. In particular, it is necessary to consider the possibility of integrating a reactive component for atypical load patterns.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"70 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140752547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
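A minimal sketch of the proactive-scaling step, assuming the Prophet library and pandas: fit on historical CPU usage, forecast over a horizon covering the deployment delay, and derive a replica count from the predicted peak. The per-pod capacity, horizon and synthetic history are assumptions, and the Prometheus query and Kubernetes scale call are deliberately stubbed out.

```python
# Proactive-scaling sketch: Prophet forecast of CPU usage, then a replica
# count derived from the predicted peak.  Prometheus collection and the
# Kubernetes scale call are stubbed; capacity figures are hypothetical.
import math
import pandas as pd
from prophet import Prophet

PER_POD_CPU = 0.5          # cores one replica can serve (assumed)
LEAD_MINUTES = 10          # forecast horizon covering deployment delay

def desired_replicas(history: pd.DataFrame) -> int:
    """history: DataFrame with columns ds (timestamp) and y (CPU cores used)."""
    model = Prophet()
    model.fit(history)
    future = model.make_future_dataframe(periods=LEAD_MINUTES, freq="min")
    forecast = model.predict(future).tail(LEAD_MINUTES)
    predicted_peak = float(forecast["yhat_upper"].max())   # be conservative
    return max(1, math.ceil(predicted_peak / PER_POD_CPU))

# In a real controller `history` would come from Prometheus and the result
# would be written to the Deployment's scale subresource on a fixed interval.
history = pd.DataFrame({
    "ds": pd.date_range("2024-04-01 12:00", periods=180, freq="min"),
    "y": [0.8 + 0.4 * math.sin(i / 20) for i in range(180)],
})
print(desired_replicas(history))
```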
A RESEARCH OF THE LATEST APPROACHES TO VISUAL IMAGE RECOGNITION AND CLASSIFICATION
Radio Electronics, Computer Science, Control Pub Date : 2024-04-02 DOI: 10.15588/1607-3274-2024-1-13
V. P. Lysechko, B. I. Sadovnykov, O. M. Komar, О. S. Zhuchenko
{"title":"A RESEARCH OF THE LATEST APPROACHES TO VISUAL IMAGE RECOGNITION AND CLASSIFICATION","authors":"V. P. Lysechko, B. I. Sadovnykov, O. M. Komar, О. S. Zhuchenko","doi":"10.15588/1607-3274-2024-1-13","DOIUrl":"https://doi.org/10.15588/1607-3274-2024-1-13","url":null,"abstract":"Context. The paper provides an overview of current methods for recognizing and classifying visual images in static images or video stream. The paper will discuss various approaches, including machine learning, current problems of these methods and possible improvements. The biggest challenges of the visual image retrieval and classification task are discussed. The main emphasis is placed on the review of such promising algorithms as SSD, YOLO, R-CNN, an overview of the principles of these methods, network architectures. \u0000Objective. The aim of the work is to analyze existing studies and find the best algorithm for recognizing and classifying visual images for further activities. \u0000Method. Primary method is to compare different factors of algorithms in order to select the most perspective one. There are different marks to compare, like image processing speed, accuracy. There are a number of studies and publications that propose methods and algorithms for solving the problem of finding and classifying images in an image [3–6]. It should be noted that most promising approaches are based on machine learning methods. It is worth noting that the proposed methods have drawbacks due to the imperfect implementation of the Faster R-CNN, YOLO, SSD algorithms for working with streaming video. The impact of these drawbacks can be significantly reduced by applying the following solutions: development of combined identification methods, processing of edge cases – tracking the position of identified objects, using the difference between video frames, additional preliminary preparation of input data. Another major area for improvement is the optimization of methods to work with real-time video data, as most current methods focus on images. \u0000Results. As an outcome of the current research we have found an optimal algorithm for further researches and optimizations. \u0000Conclusions. Analysis of existent papers and researches has demonstrated the most promising algorithm for further optimizations and experiments. Also current approaches still have some space for further. The next step is to take the chosen algorithm and investigate possibilities to enhance it.","PeriodicalId":518330,"journal":{"name":"Radio Electronics, Computer Science, Control","volume":"23 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140754042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
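An example of the kind of measurement such a comparison rests on: timing one inference of a pretrained detector from one of the reviewed families (Faster R-CNN via torchvision). The image path and confidence threshold are illustrative, and torchvision >= 0.13 is assumed for the weights= argument.

```python
# Timing one inference of a pretrained Faster R-CNN from torchvision -- the
# sort of speed/accuracy data point a detector comparison is built from.
# Image path and threshold are illustrative; torchvision >= 0.13 assumed.
import time
import torch
import torchvision
from torchvision.io import read_image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = read_image("street.jpg").float() / 255.0       # CHW tensor in [0, 1]
with torch.no_grad():
    start = time.perf_counter()
    out = model([img])[0]                            # boxes, labels, scores
    elapsed = time.perf_counter() - start

keep = out["scores"] > 0.5                           # confidence threshold
print(f"{keep.sum().item()} objects in {elapsed:.3f}s")
print(out["boxes"][keep])
```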