{"title":"METHOD FOR GENERATING A DATA SET FOR TRAINING A NEURAL NETWORK IN A TRANSPORT CONVEYOR MODEL","authors":"O. Pihnastyi, G. Kozhevnikov, Anna Burduk","doi":"10.20998/2522-9052.2024.2.09","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.09","url":null,"abstract":"The object of research is the stochastic input flow of material arriving at the input of a conveyor-type transport system. The subject of research is the development of a method for generating values of the stochastic input material flow of a transport conveyor to form a training data set for neural network models of the transport conveyor. The goal of the research is to develop a method for generating random values to construct realizations of the input material flow of a transport conveyor with specified statistical characteristics, calculated from the results of previously performed experimental measurements. The article proposes a method for generating a data set for training a neural network for a model of a branched, extended transport conveyor. A method has been developed for constructing realizations of the stochastic input material flow of a transport conveyor. Dimensionless parameters are introduced to define similarity criteria for input material flows. The stochastic input material flow is represented as a series expansion in coordinate functions. To form the statistical characteristics, a material flow realization based on the results of experimental measurements is used. As a zero approximation for the expansion coefficients, which are random variables, the normal distribution law is used. Conclusion. It is shown that as the time interval of the realization of the input material flow increases, the correlation function of the generated realization steadily tends to the theoretically determined correlation function. 
The length of the time interval for the generated realization of the input material flow was estimated.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
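The generation scheme this abstract describes (a series expansion in coordinate functions whose coefficients are, as a zero approximation, normally distributed random variables) can be sketched as follows. The trigonometric basis, the parameter names, and the zero-mean coefficients are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

def generate_flow(mean_flow, sigma, n_terms, n_samples, period, seed=0):
    """Generate one realization of a stochastic input material flow as a
    truncated series in coordinate (cosine/sine) functions whose expansion
    coefficients are independent zero-mean normal random variables.
    The basis choice and parameters are illustrative assumptions."""
    rng = random.Random(seed)
    # Zero approximation: normally distributed expansion coefficients.
    a = [rng.gauss(0.0, sigma) for _ in range(n_terms)]
    b = [rng.gauss(0.0, sigma) for _ in range(n_terms)]
    flow = []
    for i in range(n_samples):
        t = period * i / n_samples
        x = mean_flow
        for k in range(1, n_terms + 1):
            w = 2.0 * math.pi * k / period
            x += a[k - 1] * math.cos(w * t) + b[k - 1] * math.sin(w * t)
        flow.append(x)
    return flow
```

Averaged over a full period, the trigonometric terms cancel, so the sample mean of a realization stays at `mean_flow` while the fluctuations carry the prescribed second-order statistics.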
{"title":"MODELING THE DISTRIBUTION OF EMERGENCY RELEASE PRODUCTS AT A NUCLEAR POWER PLANT UNIT","authors":"Viktoriia Biliaieva, L. Levchenko, Iryna Myshchenko, O. Tykhenko, Vitalii Kozachyna","doi":"10.20998/2522-9052.2024.2.03","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.03","url":null,"abstract":"Despite the considerable attention paid to the safe operation of nuclear power plants, an accident with the release of radionuclides remains possible. This is especially true in Ukraine, where nuclear reactors are under threat of damage as a result of military operations. The distribution of radioactive substances from emergency releases cannot be studied in laboratory conditions. Therefore, the only tool for predicting the development of an accident is modeling the spread of a radionuclide cloud. The purpose of the research is to model the distribution of emergency release products at a nuclear power plant unit in a form suitable for operative assessment of the development of an accident. Results of the research: A mathematical model of the distribution of emission products at a nuclear power plant has been developed, which takes into account the initial activity of the emission products, the settling rate of the radioactive particles, the wind speed components, and the change in the intensity of the radionuclide emission over time. A technique has been developed for solving the boundary value problem in a computational domain of complex shape, taking into account obstacles to the spread of emission products. The use of the velocity potential equation in evolutionary form speeds up the calculation process. The chosen splitting scheme of the alternating-triangular method allows the velocity potential to be found explicitly at each splitting step. This enabled a software implementation of the CFD model. 
The visualized models of the emission cloud distribution make it possible to determine the radiation situation at any point of the emission product distribution zone. The developed model makes it possible to quickly predict the development of an accident in space and time, so that measures to protect people from exposure can be taken in the shortest possible time. Conclusions: The obtained emission cloud propagation models and their visualization make it possible to determine the state of environmental pollution under various initial conditions during the development of the accident.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"39 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
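As a toy illustration of the transport mechanisms the model accounts for (advection by wind plus removal of particles by settling), a one-dimensional explicit upwind step can be sketched in Python. The function name, the first-order decay term standing in for settling, and the 1-D simplification are all assumptions, far simpler than the paper's 3-D alternating-triangular splitting scheme:

```python
def upwind_step(c, u, dx, dt, k_settle):
    """One explicit upwind step of the 1-D transport equation
    dc/dt + u * dc/dx = -k * c, with wind speed u > 0 and a first-order
    decay rate k_settle standing in for particle settling.
    The CFL condition u * dt / dx <= 1 is assumed for stability."""
    out = c[:]
    for i in range(1, len(c)):
        adv = -u * (c[i] - c[i - 1]) / dx   # upwind (backward) difference
        out[i] = c[i] + dt * (adv - k_settle * c[i])
    out[0] = c[0] * (1.0 - dt * k_settle)   # inflow boundary: no advective flux
    return out
```

Without settling the scheme conserves total mass while the concentration peak drifts downwind; with `k_settle > 0` total mass decays, mimicking deposition.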
{"title":"MEDOIDS AS A PACKING OF ORB IMAGE DESCRIPTORS","authors":"O. Gorokhovatskyi, Olena Yakovleva","doi":"10.20998/2522-9052.2024.2.01","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.01","url":null,"abstract":"The aim of the research. The paper investigates the feasibility of matching medoids obtained from a set of ORB descriptors instead of matching the full set of binary descriptors for the image classification problem. Research results. Several methods were proposed: direct brute-force medoid matching, grouping of medoids for separate classes, and grouping of descriptors followed by calculation of medoids among them. Numerical experiments were performed for all these methods to compare classification accuracy and inference time. It has been shown that using medoids allows processing time to be redistributed so that more computation is performed during preprocessing rather than during classification. According to modeling performed on the Leeds Butterfly dataset, matching images based on medoids can achieve the same accuracy as matching descriptors (0.69–0.88 for different numbers of features). Medoids require additional computation during the preprocessing stage, but classification becomes faster: in our experiments we obtained about 9–10 times faster classification at the cost of a corresponding 9–10 times increase in preprocessing time for models with comparable accuracies. Finally, the efficiency of the proposed ideas was compared to a CNN trained and evaluated on the same data. As expected, the CNN required much more preprocessing (training) time, but the result is worth it: this approach provides the best classification accuracy and inference time. Conclusion. 
Medoid matching can achieve the same accuracy as direct descriptor matching, but the use of medoids allows the overall modeling time to be redistributed, increasing preprocessing time and making inference faster.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"237 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141386763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
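The medoid of a set of binary descriptors is the element that minimizes the total Hamming distance to all the others, so a class can be packed into a few representatives. A minimal sketch, modeling 256-bit ORB descriptors as Python integers (an assumption for brevity; the paper's grouping variants are not reproduced):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def medoid(descriptors):
    """Return the descriptor with the minimal total Hamming distance to the
    rest of the set -- its medoid. Brute force, O(n^2) distance evaluations,
    which is affordable because it runs once during preprocessing."""
    best, best_cost = None, None
    for d in descriptors:
        cost = sum(hamming(d, e) for e in descriptors)
        if best_cost is None or cost < best_cost:
            best, best_cost = d, cost
    return best
```

At query time only the medoids are matched, which is where the reported 9–10x inference speedup comes from: the quadratic work moves into the preprocessing stage.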
{"title":"DEEPFAKE DETECTION USING TRANSFER LEARNING-BASED XCEPTION MODEL","authors":"Velusamy Rajakumareswaran, Surendran Raguvaran, Venkatachalam Chandrasekar, Sugavanam Rajkumar, Vijayakumar Arun","doi":"10.20998/2522-9052.2024.2.10","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.10","url":null,"abstract":"Justification of the purpose of the research. In recent times, several approaches for face manipulation in videos have been widely applied and made available to the public, making it easy for anyone to edit faces in video with realistic results. While beneficial in various domains, these methods could significantly harm society if employed to spread misinformation. It is therefore vital to reliably detect whether a face has been manipulated in a video sequence. In past work, convolutional neural networks have been used to detect such deepfakes. However, they require a large number of parameters and extensive computation. To overcome these limitations and to accurately detect deepfakes in videos, a transfer learning-based model named the Improved Xception model is suggested. Obtained results. This model is trained using extracted facial landmark features with robust training. Moreover, the improved Xception model's detection accuracy is evaluated alongside ResNet and Inception, considering model loss, accuracy, ROC, training time, and the precision-recall curve. The outcomes confirm the success of the proposed model, which employs transfer learning techniques to identify fraudulent videos. 
Furthermore, the method demonstrates a noteworthy 5% increase in efficiency compared to current systems.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"4 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPGA-BASED IMPLEMENTATION OF A GAUSSIAN SMOOTHING FILTER WITH POWERS-OF-TWO COEFFICIENTS","authors":"A. Ivashko, Andrey Zuev, Dmytro Karaman, Miha Moškon","doi":"10.20998/2522-9052.2024.2.05","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.05","url":null,"abstract":"The purpose of the study is to develop methods for synthesizing a Gaussian filter that allows a simplified hardware and software implementation, in particular filters with powers-of-two coefficients. Such filters can provide effective denoising of images, including landscape maps, both natural and synthetically generated. The study also analyzes methods for FPGA implementation, comparing their hardware complexity, performance, and noise reduction with traditional Gaussian filters. Results. An algorithm for rounding filter coefficients to powers of two that provides an optimal approximation of the constructed filter to the original is presented, along with examples of developed filters. FPGA implementation, based on the Xilinx Artix-7 FPGA, is covered. Filter structures, testing methods, simulation results, and verification of the scheme are discussed. Examples of the technological placement of the implemented scheme on the FPGA chip are provided. Comparative evaluations of FPGA resources and performance for the proposed and traditional Gaussian filters are carried out. Digital modeling of the filters and noise reduction estimates for noisy images of the terrain surface are presented. The developed algorithm approximates Gaussian filter coefficients as powers of two for a given window size and maximum number of bits with a relative error of no more than 0.18. Implementing the proposed filters on FPGA reduces hardware costs while maintaining comparable performance. Computer simulations show that both traditional and proposed Gaussian filters effectively suppress additive white noise in images. 
The proposed filters improve the signal-to-noise ratio by 5–10 dB and practically match the filtering quality of traditional Gaussian filters.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"130 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
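The appeal of powers-of-two coefficients is that each multiplication in the filter becomes a bit shift in hardware. A minimal sketch of the idea: build a 1-D Gaussian kernel, then round each coefficient to the nearest 2^(-n) with at most `max_bits` fractional bits. The log-domain rounding rule is an assumption for illustration, not the paper's optimal-approximation algorithm:

```python
import math

def gaussian_kernel(size, sigma):
    """Normalized 1-D Gaussian kernel of odd length `size`."""
    c = size // 2
    k = [math.exp(-((i - c) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    s = sum(k)
    return [v / s for v in k]

def round_to_power_of_two(x, max_bits):
    """Round a positive coefficient to the nearest 2**(-n), 0 <= n <= max_bits,
    so that multiplication by it reduces to a right shift by n bits."""
    if x <= 0:
        return 0.0
    n = min(max(round(-math.log2(x)), 0), max_bits)
    return 2.0 ** (-n)
```

Rounding perturbs the kernel's unit sum, which in a full design would call for a renormalization step or the error-bounded approximation the paper reports (relative error at most 0.18).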
{"title":"RESEARCH AND ANALYSIS OF EFFICIENCY INDICATORS OF CRITICAL INFRASTRUCTURES IN THE COMMUNICATION SYSTEM","authors":"Bayram Ibrahimov, A. Hasanov, E. Hashimov","doi":"10.20998/2522-9052.2024.2.07","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.07","url":null,"abstract":"The efficiency indicators of the functioning of critical information infrastructures in a communication system are analyzed on the basis of the architectural concept of future networks. The object of the study is hardware and software complexes of special-purpose critical information infrastructures. Critical information infrastructure comprises the information and telecommunication systems whose maintenance, reliability, and security are necessary for the safe operation of special-purpose enterprises. To avoid security and reliability incidents, the studied critical infrastructures of communication systems require constant analysis and updating of operating rules. The subject of the research is a method for calculating quality indicators of the functioning of critical information infrastructures in communication systems. In this work, using the example of a communication system based on modern technologies, the sequence of actions for analyzing threats to the security of a critical information infrastructure facility is considered. The purpose of the study is to develop a new approach to creating methods for calculating indicators of the efficiency, reliability, and security of information systems. Based on the analysis, a method for calculating efficiency indicators of critical information infrastructures of communication systems is proposed, and analytical expressions important for further research are obtained. 
As a result of the study, the main conclusions were obtained; they can be implemented and used in critical infrastructures of communication systems to calculate the quality of functioning of public computer and telecommunication systems.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"154 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"COMPARATIVE ANALYSIS OF SPECTRAL ANOMALIES DETECTION METHODS ON IMAGES FROM ON-BOARD REMOTE SENSING SYSTEMS","authors":"Artem Hurin, H. Khudov, Oleksandr Kostyria, Oleh Maslenko, Serhii Siadrystyi","doi":"10.20998/2522-9052.2024.2.06","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.06","url":null,"abstract":"The subject matter of the article is methods of detecting spectral anomalies in images from remote sensing systems. The goal is to conduct a comparative analysis of methods for detecting spectral anomalies in images from remote sensing systems. The tasks are: analysis of the main methods of detecting spectral anomalies in images from remote sensing systems; processing of images from remote sensing systems using the basic methods of detecting spectral anomalies; comparative assessment of the quality of methods for detecting spectral anomalies in images from remote monitoring systems. The methods used are: methods of digital image processing, the mathematical apparatus of matrix theory, methods of mathematical modeling, methods of optimization theory, and analytical and empirical methods of image comparison. The following results are obtained. The main methods of detecting spectral anomalies in images from remote sensing systems were analyzed. Images from remote sensing systems were processed using the basic methods of detecting spectral anomalies. A comparative assessment of the quality of the methods was carried out. Conclusions. The spectral difference of the considered methods is revealed by the values of the information indicators: Euclidean distance, Mahalanobis distance, brightness contrast, and Kullback-Leibler information divergence. Mathematical modeling of the considered methods of detecting spectral anomalies was carried out for images with a relatively “simple” and with a complicated background. 
It was established that when searching for a spectral anomaly in an image with a complicated background, the method based on the Kullback-Leibler divergence can be more effective than the other considered methods, although it is not optimal. When several areas of the image show high divergence values, they should be investigated further using the specified methods in order to determine the position of the spectral anomaly more accurately.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"107 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
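The Kullback-Leibler indicator that the comparison singles out for complicated backgrounds can be sketched as a score between a candidate window's spectral distribution and the background distribution. Representing spectra as discrete histograms and the `eps` smoothing term are illustrative assumptions:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions given as equal-length lists of probabilities.
    eps guards against division by zero for empty histogram bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def anomaly_scores(windows, background):
    """Score each candidate window by its divergence from the background
    spectral distribution; higher scores flag potential spectral anomalies."""
    return [kl_divergence(w, background) for w in windows]
```

Ranking windows by this score reproduces the article's recommendation: regions with high divergence are shortlisted, then re-examined with the other indicators to localize the anomaly.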
{"title":"ENSURING THE FUNCTIONAL STABILITY OF THE INFORMATION SYSTEM OF THE POWER PLANT ON THE BASIS OF MONITORING THE PARAMETERS OF THE WORKING CONDITION OF COMPUTER DEVICES","authors":"Oleg Barabash, Olha Svynchuk, I. Salanda, Viktor Mashkov, M. Myroniuk","doi":"10.20998/2522-9052.2024.2.12","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.12","url":null,"abstract":"The functional stability of the information system of a power plant is ensured by a complex of processes and mechanisms capable of maintaining normal operation of the system even in the event of errors, failures, or negative impacts. The aim of the research. An important aspect of ensuring the functional stability of an information system is monitoring its healthy state, as this helps to identify, analyze, and respond to any problems in a timely manner, ensuring reliable and uninterrupted operation of the system. Test diagnosis based on the principle of a wandering diagnostic core was chosen. Research results. An algorithm for detecting failures in the system has been developed, based on decoding the totality of the results of the system's test checks. The developed software application makes it possible to monitor the state of various components of the information system and to detect possible problems or failures in a timely manner in order to support continuous operation of the system. The application increases the reliability of diagnostics, reduces diagnostic time, and performs diagnostics with the specified completeness and depth. The depth and completeness of diagnosis are determined by the test task. Verification. To confirm the correctness of the developed software product, mathematical modeling of the process of diagnosing the information system was carried out; the system was divided into several subsystems, each containing a certain number of modules. 
For the division into subsystems, the number of modules in each subsystem is important: it should not exceed 30 modules. This limitation is due to the limited computing power of modern microprocessor technology when solving this class of NP-complete problems.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"225 17","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
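Decoding the totality of mutual test results is, in the general case, a combinatorial search over candidate fault sets, which is what makes the problem NP-complete and motivates the ~30-module cap per subsystem. A brute-force sketch under a PMC-style assumption (a fault-free tester reports the tested module's true state, a faulty tester's report is unreliable); the paper's actual wandering-core decoding algorithm is not reproduced:

```python
from itertools import combinations

def consistent(fault_set, tests):
    """Check whether a hypothesized set of faulty modules explains every
    test outcome: a fault-free tester reports the true state of the tested
    module, while a faulty tester may report anything."""
    for tester, tested, reported_faulty in tests:
        if tester in fault_set:
            continue  # faulty tester: any outcome is possible
        if reported_faulty != (tested in fault_set):
            return False
    return True

def diagnose(n_modules, tests, max_faults):
    """Brute-force syndrome decoding: return the smallest fault set
    consistent with the test results, or None. Exponential in n_modules,
    hence the practical limit on subsystem size."""
    for k in range(max_faults + 1):
        for fs in combinations(range(n_modules), k):
            if consistent(set(fs), tests):
                return set(fs)
    return None
```

For three modules where module 0 tests 1 and 2, and 1 tests 2, the reports (False, True, True) are explained only by module 2 being faulty.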
{"title":"IMAGE CLASSIFIER FOR FAST SEARCH IN LARGE DATABASES","authors":"Valerii Filatov, Anna Filatova, Anatolii Povoroznyuk, Shakhin Omarov","doi":"10.20998/2522-9052.2024.2.02","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.02","url":null,"abstract":"Relevance. The avalanche-like growth in the amount of information on the Internet necessitates the development of effective methods for quickly processing such information in information systems. Clustering of news information takes into account both the morphological analysis of texts and the graphic content. Thus, an urgent task is the clustering of images accompanying textual information on various web resources, including news portals. The subject of study is an image classifier that exhibits low sensitivity to growth of the information in databases. The purpose of the article is to enhance the efficiency of searching for identical images in databases that receive a daily influx of 10–12 thousand images, by developing an image classifier. Methods used: mathematical modeling, content-based image retrieval, two-dimensional discrete cosine transform, image processing methods, decision-making methods. The following results were obtained. An image classifier with low sensitivity to growth of the information in the database has been developed. The properties of the developed classifier have been analyzed. 
The experiments demonstrated that clustering information based on images using the developed classifier is sufficiently fast and cost-effective in terms of information volume and computational power requirements.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"54 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141387655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
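The classifier builds on the two-dimensional discrete cosine transform. A common way to turn DCT coefficients into a compact, fast-to-compare signature (in the spirit of perceptual hashing, not necessarily the authors' exact construction) is to keep the low-frequency block and binarize it against its median:

```python
import math

def dct_1d(v):
    """Unnormalized DCT-II of a sequence."""
    n = len(v)
    return [sum(v[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def dct_2d(img):
    """Separable 2-D DCT: transform rows, then columns."""
    rows = [dct_1d(r) for r in img]
    cols = list(zip(*rows))
    return list(zip(*[dct_1d(list(c)) for c in cols]))

def dct_signature(img, size=4):
    """Binarize the top-left size x size block of low-frequency DCT
    coefficients against their median -- a compact binary signature that
    changes little when the database grows, enabling fast duplicate search."""
    coeffs = [row[:size] for row in dct_2d(img)[:size]]
    flat = sorted(v for row in coeffs for v in row)
    median = flat[len(flat) // 2]
    return tuple(1 if v > median else 0 for row in coeffs for v in row)
```

Identical images map to identical signatures, so candidate duplicates can be found by exact lookup in a hash index before any pixel-level comparison.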
{"title":"METHOD OF TESTING LARGE NUMBERS FOR PRIMALITY","authors":"Vladimir Pevnev, Oles Yudin, Peter Sedlaček, Nina Kuchuk","doi":"10.20998/2522-9052.2024.2.11","DOIUrl":"https://doi.org/10.20998/2522-9052.2024.2.11","url":null,"abstract":"The current stage of scientific and technological development entails ensuring information security across all domains of human activity. Confidential data and wireless channels of remote control systems are particularly sensitive to various types of attacks. In these cases, various encryption systems are most commonly used for information protection, among which large prime numbers are widely utilized. The subject of research is methods for generating prime numbers, which involve selecting candidates for primality and determining the primality of numbers. The objective of the work is the development and theoretical justification of a method for determining the primality of numbers, together with the results of its testing. The aim is to address the following main tasks: analyze the most commonly used and latest algorithms, methods, approaches, and tools for primality testing of large numbers; propose and theoretically justify a method for determining the primality of large numbers; and test it. To achieve this aim, general scientific methods have been applied, including analysis of the subject area and mathematical apparatus, set theory, number theory, field theory, and experimental design for organizing and conducting the experimental research. 
The following results have been obtained: modern methods for selecting candidates for primality testing of large numbers have been analyzed, options for generating large prime numbers have been considered, and the main shortcomings of these methods for the practical application of the constructed prime numbers have been identified. Methods for determining candidates for primality testing of large numbers and a three-stage method for testing numbers for primality have been proposed and theoretically justified. Testing of the proposed primality determination method has demonstrated the correctness of the theoretical conclusions regarding the feasibility of applying the proposed method to the stated problem. Conclusions. The use of a candidate selection strategy allows a significant reduction in the number of tested numbers. For numbers of about 200 digits, the set of tested numbers is reduced to 8.82%. As the size of the tested numbers increases, their share will decrease further. The proposed method for primality testing is simple and effective. The first two stages filter out all composite numbers except Carmichael numbers. In the first stage, trial division by the first ten prime numbers filters out over 80 percent of the tested numbers. In the second stage, composite numbers with factors greater than 29 are sieved out. In the third stage, Carmichael numbers are sieved out. 
The test is polynomial, deterministic, and unconditional.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141388096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
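The staged filtering strategy can be illustrated with a common pipeline: trial division by the first ten primes (matching the paper's first stage), followed by a Miller-Rabin test. Note the explicit substitution: Miller-Rabin is probabilistic, whereas the paper's method is deterministic and unconditional, so this sketch only mirrors the structure, not the guarantees:

```python
import random

# The first ten primes -- the paper's first-stage sieve (divisors up to 29).
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def is_probable_prime(n, rounds=16, seed=1):
    """Stage 1: trial division by the first ten primes, which already rejects
    the bulk of candidates. Stage 2 (stand-in): Miller-Rabin, which also
    rejects Carmichael numbers with factors above 29, unlike a plain
    Fermat test."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # composite witness found
    return True
```

The cheap first stage is what drives the reported reduction of candidates (over 80 percent rejected before any expensive modular exponentiation is attempted).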