{"title":"Security basic model for applied tasks of the distributed information system","authors":"Y. Rodin, I. Sinitsyn","doi":"10.15407/pp2021.02.016","DOIUrl":"https://doi.org/10.15407/pp2021.02.016","url":null,"abstract":"The tasks of modelling and the components of a basic model for protecting the applied tasks of a distributed information system are considered. Measures of, and relationships among, security parameters, protection, new and reference attacks, anomalies, and threat environments are proposed. The conditions under which threats and attacks arise, and consequently inconsistencies appear in the results of applied tasks, are established. The article first discusses the concept of a distributed information system, a system of applied tasks, and modern trends toward zero-trust architecture in building information security systems. It then gives an overview of existing methods for detecting and countering attacks based on reference knowledge bases. To improve the level of security, it is proposed to analyze the causes of attacks, namely hazards and threats to the system. Attacks, hazards, and threats are considered as structured processes that affect the internal and external environment of the system of applied tasks, with a further impact on the output of these tasks. The concepts of the protection level and the security level of a distributed information system are introduced, as well as the concepts of applied-task, environment, and user contradictions. As the logical metric for detecting discrepancies, semantic analysis is proposed: based on the reference knowledge base and on text transformations, it should be applied at the stage of loading an applied task and should describe the input and output data and the requirements for the environment in which the task is solved. The result of the research is a proposed method for identifying additional data about hazards, threats, attacks, countermeasures to attacks, and applied-task solving. These data are generated from the reference textual descriptions and from descriptions augmented with the proposed contradictions. By building additional reference images of threats, attacks, and countermeasures, it becomes possible to prevent the activation of new attacks on the distributed information system.","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125131375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Defining degree of semantic similarity using description logic tools","authors":"O. Zakharova","doi":"10.15407/pp2021.02.024","DOIUrl":"https://doi.org/10.15407/pp2021.02.024","url":null,"abstract":"Establishing the semantic similarity of information is an integral part of solving any information retrieval task, including tasks related to big data processing, discovery of semantic web services, categorization and classification of information, etc. Special functions that quantify the degree of semantic similarity make it possible to rank the retrieved information by its semantic proximity to the search goal or to the request/template. The design of such measures should take into account many aspects, from the meanings of the matched concepts to the specifics of the business task in which the matching is done. Usually, to construct such similarity functions, semantic approaches are combined with structural ones, which provide syntactic comparison of concept descriptions. This makes the descriptions of the concepts more detailed, and the impact of syntactic matching can be significantly reduced by using more expressive description logics to represent information and by shifting the focus to semantic properties. Today, DL-ontologies are the most developed tools for representing semantics, and the reasoning mechanisms of description logics (DL) provide the possibility of logical inference. Most of the estimates presented in this paper are based on basic DLs that support only the intersection constructor, but the described approaches can be applied to any DL that provides the basic reasoning services. This article analyzes existing approaches, models, and measures based on description logics. A classification of the estimation methods by both the level at which similarity is defined and the type of matching is proposed. The main attention is paid to establishing the similarity between concepts (conceptual-level models). The task of establishing the degree of similarity between instances, and between a concept and an instance, reduces to finding the most specific concept for the instance(s) and evaluating the similarity between the resulting concepts. The notion of existential similarity is introduced. Examples of applying certain types of measures to evaluate the degree of semantic similarity of notions and/or knowledge are demonstrated on a geometry ontology.","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123842315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of neuro evolution tools in automation of technical control systems","authors":"A. Doroshenko, I. Achour, Ntuu Kpi","doi":"10.15407/pp2021.01.016","DOIUrl":"https://doi.org/10.15407/pp2021.01.016","url":null,"abstract":"Reinforcement learning is a field of machine learning concerned with how software agents should take actions in an environment so as to maximize a cumulative reward. This paper proposes a new application of reinforcement learning techniques, in the form of neuroevolution of augmenting topologies (NEAT), to control-automation problems, using modeling control problems of technical systems. Key application components include the OpenAI Gym toolkit for developing and comparing reinforcement learning algorithms, SharpNEAT, a full-fledged open-source implementation of the NEAT genetic algorithm, and intermediate software for orchestrating these components. The neuroevolution-of-augmenting-topologies algorithm demonstrates the discovery of efficient neural networks on the example of a simple standard continuous-control problem from OpenAI Gym.","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130112846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relations of logical consequence in logics of partial predicates with composition of predicate complement","authors":"O. Shkilniak","doi":"10.15407/pp2019.03.011","DOIUrl":"https://doi.org/10.15407/pp2019.03.011","url":null,"abstract":"In this paper we study software-oriented logics of partial predicates with new special non-monotonic operation (composition) of the predicate complement. We denote these logics by LC and composition of the predicate complement by . Such operations are used in various versions of the Floyd-Hoare program logic with partial pre- and post-conditions. We describe first order composition algebras and LC languages. For LC, a number of logical consequence relations ( Pc |= T , Pc |= F , Rc |= T , Rc |= F , Pc |= TF , Rc |= TF , P с |= IR ) and logical consequence relations under the conditions of undefinedness ( P |= T ^ , P |= F ^ , R |= T ^ , R |= F ^ , P |= TF ^ , R |= TF ^ ) are specified. Properties of the defined relations are investigated, differences and the relationship between them are given. For the introduced relations, we describe the conditions for their guaranteed presence, the decomposition conditions for formulas and the properties of quantifier elimination. The theorem of elimination of the conditions of undefinedness for the relations |= T ^ and |= F is proved. Thus, the relations P |= T ^ , P |= F ^ , R |= T ^ and R |= F ^ can be expressed by Pc |= T , Pc |= F , Rc |= T and Rc |= F respectively. However, it is shown that |= IR ^ cannot be expressed by P с |= IR . Moreover, it is impossible to define correctly the decomposition conditions for formulas for P с |= IR . Properties of decomposition conditions for formulas are different for the relations |= T and |= F , therefore properties of decomposition and equivalent transformations must be specified indirectly through the corresponding properties of |= T and |= F . 
First order sequent calculi for the introduced logical consequence relations for LC and for the logical consequence relations under the conditions of undefinedness will be constructed in the forthcoming articles. Problems in programming 2019; 3: 11-27","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122362446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tasks and methods of Big Data analysis (a survey)","authors":"O. Balabanov","doi":"10.15407/PP2019.03.058","DOIUrl":"https://doi.org/10.15407/PP2019.03.058","url":null,"abstract":"We review the tasks and methods most relevant to Big Data analysis, with emphasis on their conceptual and pragmatic issues (avoiding unnecessary mathematical details). We suggest that the whole scope of Big Data jobs falls into four conceptual modes (types) of large-scale usage: 1) intelligent information retrieval; 2) massive (large-scale) conveyor data processing (mining); 3) model inference from data; 4) knowledge extraction from data (detection of regularities and discovery of structures). The essence of various tasks (clustering, regression, generative model inference, structure discovery, etc.) is elucidated. We compare key methods of clustering, regression, classification, deep learning, generative model inference and causal discovery. Cluster analysis may be divided into methods based on mean distance, methods based on local distance, and methods based on a model. The targeted (predictive) methods fall into two categories: methods which infer a model, and \"tied to data\" methods which compute a prediction directly from the data. Common tasks of temporal data analysis are briefly overviewed. Among the diverse methods of generative model inference, we focus on causal network learning, because models of this class are very expressive and flexible and are able to predict the effects of interventions under varying conditions. The independence-based approach to causal network inference from data is characterized, and we give a few comments on the specifics of inferring dynamical causal networks from time series. Challenges of Big Data analysis raised by data multidimensionality, heterogeneity and huge volume are presented, and some statistical issues related to these challenges are summarized. 
Problems in programming 2019; 3: 58-85","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131229187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Big data platforms. Main objectives, features and advantages","authors":"O. Zakharova","doi":"10.15407/pp2019.03.101","DOIUrl":"https://doi.org/10.15407/pp2019.03.101","url":null,"abstract":"This paper presents an overview of existing big data platforms. The goal is to identify the main problems and solutions in this area, as well as the properties of big data platforms that determine their capabilities, advantages or weaknesses in solving these problems. The relevance of the topic is due to the rapid evolution of mobile devices and application systems, the corresponding growth in the volume of information, and the inability of traditional systems to process such amounts of data in a reasonable time. A big data platform is an enterprise-class information technology platform that provides, in one solution, the properties and functionality needed for developing, deploying, processing and managing big data. The goal of creating and using such platforms is to improve the scalability, availability, performance, and security of organizations working with big data. Big data platforms make it possible to process multi-structured data in real time and allow different users to employ them for various tasks involving big data. The paper discusses frameworks developed for solving big data problems, analyzes their characteristics, operating principles and capabilities in the context of the problems they are able to solve, and identifies existing “gaps” and directions for further development. Solving the problems of big data, namely ensuring effective storage, processing and analysis of data, will make information more useful, and companies that work with big data more competitive. 
Problems in programming 2019; 3: 101-115","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124975997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"First-order composition-nominative logics with predicates of weak equality and of strong equality","authors":"S. Shkilniak","doi":"10.15407/PP2019.03.028","DOIUrl":"https://doi.org/10.15407/PP2019.03.028","url":null,"abstract":"Development of new software-oriented logical formalisms is a topical problem. The paper introduces logics of partial predicates with the predicate complement and equality predicates; we denote them LCE. They extend logics of quasiary predicates with equality and logics with the predicate complement. The composition of the predicate complement is used in extensions of Floyd-Hoare program logics to the class of partial predicates. We define predicates of weak equality and of strong equality. Thus, LCE with predicates of weak equality (denoted by LCEw) and LCE with predicates of strong equality (denoted by LCEs) can be specified. LCE can be studied on the first order and renominative levels. We consider composition algebras of LCE, investigate properties of their compositions and describe first order languages of such logics. We concentrate on the properties related to the equality predicates and the composition of the predicate complement. Various variants of logical consequence relations for the first order LCE are introduced and studied: P |= T , P |= F , R |= T , R |= F , P |= TF , R |= TF , P |= IR . In particular, we obtained that LCEw are somewhat degenerate, as for them all the relations are incorrect except for the irrefutability logical consequence relation under the conditions of undefinedness |= IR ^ . At the same time, all of the listed relations are correct for LCEs. Properties of the logical consequence relations are the semantic basis for construction of the respective sequent calculi. Further investigation of logical consequence relations for LCE includes adding the conditions of undefinedness and constructing the corresponding sequent calculi; this is planned for the forthcoming articles. 
Problems in programming 2019; 3: 28-44","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132945374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Basics of programming in the context of software engineering","authors":"М.О. Sydorov","doi":"10.15407/pp2019.03.045","DOIUrl":"https://doi.org/10.15407/pp2019.03.045","url":null,"abstract":"The article discusses the use of the constructive approach to building a program, which is systematically cultivated in software engineering and was made possible by a number of fundamental results obtained in programming theory. First, based on the structure theorem, use of the go to operator was reasonably abandoned and the structured programming method was proposed, which provided a real way to create understandable programs. Second, the concept of a subroutine, although originally used only to reduce routine work in the programming process, became the first means of modular presentation of programs. Later, the block and the subroutine formed the basis of block-oriented (procedural, subroutine) languages and of the procedural (subroutine) programming method. Third, to answer questions about the boundaries, size, and construction of a module, the concepts of the cohesion of the parts that make up a module and of the coupling between modules were introduced, along with specific modularization criteria and a module construct based on the concept of information hiding. The module construct was implemented in the programming language Modula, and later Modula-2. The concept of the module underlies the modular (compositional) programming method. The Simula 67 language laid the foundations of object-oriented languages, which developed through work on the concepts of inheritance, late binding and references, and culminated in the development of object-oriented languages and object-oriented (classification) programming. Thus, a framework for reuse, systematic reuse and component software development was created. 
Now these works are developing in the direction of research into and creation of software as a system of systems, using the link between system analysis and software engineering, and of developing software system engineering. In the article, classification is used as a tool for learning the basics of programming that allows the concept of program design to be clarified. The level of encapsulation, based on the software engineering principles of encapsulation and multi-level representation, is used as the classification feature. By applying the principle of encapsulation at different levels of representation of program structure, corresponding to different degrees of software abstraction, the concept of the encapsulation level was obtained. Using this concept, the types of software constructions and the corresponding methods of programming (constructing) programs can be identified. Using the introduced concepts and the constructive approach to building a program, the author created a didactics of the basics of programming, which was deployed through lectures for students of the specialty “Software Engineering” (121) and the author's textbook for students and post-graduate students of the specialty. Problems in programming 2019; 3: 45-57","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130248608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The modification of the meta-language of normal forms of knowledge","authors":"A. F. Kurgaev","doi":"10.15407/PP2019.03.003","DOIUrl":"https://doi.org/10.15407/PP2019.03.003","url":null,"abstract":"The set of relations of normal forms of knowledge (alternative, concatenation, negation and iteration) is expanded by two new relations: the commitment relation (a non-zero number of repetitions of some structure) and the non-commitment relation of some structure, as well as structure brackets. The new relations are introduced by describing their structures with the basic relations of the meta-language of normal forms of knowledge. The text and graphical descriptions of the meta-language of normal forms of knowledge are presented, extended by the stylistic relations of information-structure descriptions. Examples of graphical variants of representing the structures of the new relations between terms, which are used to self-describe the modified meta-language of normal forms of knowledge, are presented with the minimally required comments. Problems in programming 2019; 3: 03-10","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129095790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What is Big Data","authors":"V. Reznichenko","doi":"10.15407/PP2019.03.086","DOIUrl":"https://doi.org/10.15407/PP2019.03.086","url":null,"abstract":"The article attempts to uncover the essence of the concept of Big Data based on the analysis of materials from various sources. The defining characteristics of Big Data are given, their classification is presented, a brief account of their origin and development is provided, the basic principles of operation are set out, methods and technologies for analysis and visualization are outlined, and the life cycle of data management using Big Data technology is described. Problems in programming 2019; 3: 86-100","PeriodicalId":313885,"journal":{"name":"PROBLEMS IN PROGRAMMING","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132228505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}