{"title":"An Algorithm of Angular Superresolution Using the Cholesky Decomposition and Its Implementation Based on Parallel Computing Technology","authors":"S. E. Mishchenko, N. V. Shatskiy","doi":"10.3103/S014641162307009X","DOIUrl":"10.3103/S014641162307009X","url":null,"abstract":"<p>An algorithm of angular superresolution based on the Cholesky decomposition, which is a modification of the Capon algorithm, is proposed. It is shown that the proposed algorithm makes it possible to abandon the inversion of the covariance matrix of input signals. The proposed algorithm is compared with the Capon algorithm by the number of operations. It is established that the proposed algorithm, with a large dimension of the problem, provides some gain both when implemented on a single-threaded and multithreaded computer. Numerical estimates of the performance of the proposed and original algorithm using the Compute Unified Device Architecture (CUDA) NVidia parallel computing technology are obtained. It is established that the proposed algorithm saves GPU computing resources and is able to solve the problem of constructing a spatial spectrum when the dimensionality of the covariance matrix of input signals is almost doubled.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"661 - 671"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140008837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recursive Sentiment Detection Algorithm for Russian Sentences","authors":"A. Y. Poletaev, I. V. Paramonov","doi":"10.3103/S0146411623070118","DOIUrl":"10.3103/S0146411623070118","url":null,"abstract":"<p>The article is devoted to the task of sentiment detection of Russian sentences. The sentiment is conceived as the author’s attitude to the topic of a sentence. This assay considers positive, neutral, and negative sentiment classes, i.e., the task of three-classes classification is solved. The article introduces a rule-based sentiment detection algorithm for Russian sentences. The algorithm is based on the assumption that the sentiment of a phrase can be determined by the sentiments of its parts by the recursive application of appropriate semantic rules to the sentiments of its parts organized as a constituency parse tree. The utilized set of semantic rules was constructed based on a discussion with experts in linguistics. The experiments showed that the proposed recursive algorithm performs slightly worse on the hotel reviews corpus than the adapted rule-based approach: weighted F1-measures are 0.75 and 0.78, respectively. To measure the algorithm efficiency on complex sentences, we created OpenSentimentCorpus based on OpenCorpora, an open corpus of sentences extracted from Russian news and periodicals. On OpenSentimentCorpus the recursive algorithm performs be.er than the adapted approach does: F1-measures are 0.70 and 0.63, respectively. This indicates that the proposed algorithm has an advantage in case of more complex sentences with more subtle ways of expressing the sentiment.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"740 - 749"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140001684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Application of Majority Voting Functions to Estimate the Number of Monotone Self-Dual Boolean Functions","authors":"L. Y. Bystrov, E. V. Kuzmin","doi":"10.3103/S0146411623070027","DOIUrl":"10.3103/S0146411623070027","url":null,"abstract":"<p>One of the problems of modern discrete mathematics is Dedekind’s problem on the number of monotone Boolean functions. For other precomplete classes, general formulas for the number of functions of the classes had been found, but it has not been found so far for the class of monotone Boolean functions. Within the framework of this problem, there are problems of a lower level. One of them is the absence of a general formula for the number of Boolean functions of intersection <span>(MS)</span> of two classes—the class of monotone functions and the class of self-dual functions. In the paper, new lower bounds are proposed for estimating the cardinality of the intersection for both an even and an odd number of variables. It is shown that the majority voting function of an odd number of variables is monotone and self-dual. The majority voting function of an even number of variables is determined. Free voting functions, which are functions with fictitious variables similar in properties to majority voting functions, are introduced. Then the union of a set of majority voting functions and a set of free voting functions is considered, and the cardinality of this union is calculated. The resulting value of the cardinality is proposed as a lower bound for <span>(left| {MS} right|)</span>. For the class <span>(MS)</span> of monotone self-dual functions of an even number of variables, the lower bound is improved over the bounds proposed earlier, and for functions of an odd number of variables, the lower bound for <span>(left| {MS} right|)</span> is presented for the first time.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"706 - 717"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140001687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Genre Classification of Russian Texts Based on Modern Embeddings and Rhythm","authors":"K. V. Lagutina","doi":"10.3103/S0146411623070076","DOIUrl":"10.3103/S0146411623070076","url":null,"abstract":"<p>This article investigates modern vector text models for solving the problem of genre classifying Russian-language texts. The models include ELMo embeddings, a pretrained BERT language model, and a set of numerical rhythmic characteristics based on lexico-grammatical tools. The experiments have been carried out on a corpus of 10 000 texts in five genres: novels, scientific articles, reviews, posts from the VKontakte social network, and news from OpenCorpora. Visualization and analysis of statistics for rhythmic characteristics have made it possible to distinguish both the most diverse genres in terms of rhythm (novels and reviews) and the least (scientific articles). It is these genres that are subsequently classified best using rhythm and the LSTM neural network classifier. Clustering and classifying texts by genre using the ELMo and BERT embeddings make it possible to separate one genre from another with a small number of errors. The multiclassification F-measure reaches 99%. This study confirms the effectiveness of modern embeddings in the tasks of computational linguistics and highlights the advantages and limitations of the set rhythmic characteristics on the genre classification material.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"817 - 827"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140001688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing Dependencies and Inference Rules in Databases","authors":"S. V. Zykin","doi":"10.3103/S0146411623070179","DOIUrl":"10.3103/S0146411623070179","url":null,"abstract":"<p>The process of testing dependencies and inference rules can be used in two ways. First of all, testing allows verifying hypotheses about unknown inference rules. In this case, the main goal is to search for a counterexample relation that showcases the feasibility of the initial dependencies and contradicts the consequence. A found counterexample refutes the hypothesis, and the absence of a counterexample allows searching for a generalization of the rule and for conditions of its feasibility. Testing cannot be used to prove the feasibility of inference rules because generalization requires searching for universal inference conditions for each rule, which is impossible to program since even the form of these conditions is unknown. Secondly, when designing a particular database, it may be necessary to test the feasibility of a rule for which there is no theoretical justification. Such a situation can take place in the presence of anomalies in the superkey. This problem is solved by using join dependency of the inference rules. A complete system of rules (axioms) for these dependencies is yet to be found. This article discusses (1) a technique for testing inference rules through the example of join dependencies, (2) proposes a testing algorithm scheme, (3) considers some hypotheses for which there are no counterexamples or inference rules, and (4) proposes an example of testing used to search for the correct decomposition of a superkey.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"788 - 802"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140888812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transformation of C Programming Language Memory Model into Object-Oriented Representation of EO Language","authors":"A. I. Legalov, Y. G. Bugayenko, N. K. Chuykin, M. V. Shipitsin, Y. I. Riabtsev, A. N. Kamenskiy","doi":"10.3103/S0146411623070088","DOIUrl":"10.3103/S0146411623070088","url":null,"abstract":"<p>The paper analyzes the possibilities of transforming C programming language constructs into objects of EO programming language. The key challenge of the method is the transpilation from a system programming language into a language of a higher level of abstraction, which does not allow direct manipulations with computer memory. Almost all application and domain-oriented programming languages disable such direct access to memory. Operations that need to be supported in this case include the use of dereferenced pointers, the imposition of data of different types in the same memory area, and different interpretation of the same data which is located in the same memory address space. A decision was made to create additional EO-objects that directly simulate the interaction with computer memory as in C language. These objects encapsulate unreliable data operations which use pointers. An abstract memory object was proposed for simulating the capabilities of C language to provide interaction with computer memory. The memory object is essentially an array of bytes. It is possible to write into memory and read from memory at a given index. The number of bytes read or written depends on which object is being used. The transformation of various C language constructs into EO code is considered at the level of the compilation unit. To study the variants and analyze the results a transpiler was developed that provides necessary transformations. It is implemented on the basis of Clang, which forms an abstract syntax tree. This tree is processed using LibTooling and LibASTMatchers libraries. As a result of compiling a C program, code in EO language is generated. The considered approach turns out to be appropriate for solving different problems. One of such problems is static code analysis. Such solutions make it possible to isolate low-level code fragments into separate program objects, focusing on their study and possible transformations into more reliable code.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"803 - 816"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140001807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Some Estimate for the Norm of an Interpolation Projector","authors":"Mikhail Nevskii","doi":"10.3103/S0146411623070106","DOIUrl":"10.3103/S0146411623070106","url":null,"abstract":"<p>Let <span>({{Q}_{n}}{{ = [0,1]}^{n}})</span> be the unit cube in <span>({{mathbb{R}}^{n}})</span> and let <span>(C({{Q}_{n}}))</span> be the space of continuous functions <span>(f:{{Q}_{n}} to mathbb{R})</span> with the norm <span>({{left| {left| f right|} right|}_{{C({{Q}_{n}})}}}: = mathop {max }nolimits_{x in {{Q}_{n}}} left| {f(x)} right|.)</span> By <span>({{Pi }_{1}}left( {{{mathbb{R}}^{n}}} right))</span> denote the set of polynomials of degree <span>( leqslant 1)</span>, i. e., the set of linear functions on <span>({{mathbb{R}}^{n}})</span>. The interpolation projector <span>(P:C({{Q}_{n}}) to {{Pi }_{1}}({{mathbb{R}}^{n}}))</span> with the nodes <span>({{x}^{{(j)}}} in {{Q}_{n}})</span> is defined by the equalities <span>(Pfleft( {{{x}^{{(j)}}}} right) = fleft( {{{x}^{{(j)}}}} right))</span>, <span>(j = 1,)</span> <span>( ldots ,)</span> <span>(n + 1)</span>. Let <span>({{left| {left| P right|} right|}_{{{{Q}_{n}}}}})</span> be the norm of <span>(P)</span> as an operator from <span>(C({{Q}_{n}}))</span> to <span>(C({{Q}_{n}}))</span>. If <span>(n + 1)</span> is an Hadamard number, then there exists a nondegenerate regular simplex having the vertices at vertices of <span>({{Q}_{n}})</span>. We discuss some approaches to get inequalities of the form <span>({{left| {left| P right|} right|}_{{{{Q}_{n}}}}} leqslant csqrt n )</span> for the norm of the corresponding projector <span>(P)</span>.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"718 - 726"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140001447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-Step Coloring of Grid Graphs of Different Types","authors":"A. V. Smirnov","doi":"10.3103/S0146411623070131","DOIUrl":"10.3103/S0146411623070131","url":null,"abstract":"<p>In this article, we consider the NP-hard problem of the two-step coloring of a graph. It is required to color the graph in the given number of colors in a way, when no pair of vertices has the same color, if these vertices are at a distance of one or two between each other. The optimum two-step coloring is one that uses the minimum possible number of colors. The two-step coloring problem is studied in application to grid graphs. We consider four types of grids: triangular, square, hexagonal, and octagonal. We show that the optimum two-step coloring of hexagonal and octagonal grid graphs requires four colors in the general case. We formulate the polynomial algorithms for such a coloring. A square grid graph with the maximum vertex degree equal to 3 requires four or five colors for a two-step coloring. In this paper, we propose the backtracking algorithm for this case. Also, we present the algorithm, which works in linear time relative to the number of vertices, for the two-step coloring in seven colors of a triangular grid graph and show that this coloring is always correct. If the maximum vertex degree equals six, the solution is optimum.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"760 - 771"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140001612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Building Self-Complementary Codes and Their Application in Information Hiding","authors":"Y. V. Kosolapov, F. S. Pevnev, M. V. Yagubyants","doi":"10.3103/S0146411623070040","DOIUrl":"10.3103/S0146411623070040","url":null,"abstract":"<p>Line codes are widely used to protect data transmission and storage systems against errors, keep various cryptographic algorithms and protocols working stably, and protect hidden information from errors in a stegocontainer. One of the code classes applied in a number of the listed areas is linear self-complementary codes over a binary field. These codes contain a vector of all ones and their weight numerator is a symmetric polynomial. In applied problems, self-complementary [<i>n</i>, <i>k</i>] codes are often required to have the maximal possible code distance <i>d</i>(<i>k</i>, <i>n</i>) at given length <i>n</i> and size <i>k</i>. The values of <i>d</i>(<i>k</i>, <i>n</i>) are already known for <i>n</i> < 13. The task formulated in this paper for self-complementary codes with length <i>n</i> = 13, 14, 15 is to find lower estimates of <i>d</i>(<i>k</i>, <i>n</i>) and values proper of <i>d</i>(<i>k</i>, <i>n</i>)<i>.</i> The development of an efficient method for obtaining a lower estimate close to <i>d</i>(<i>k</i>, <i>n</i>) is an urgent task, because finding values proper of <i>d</i>(<i>k</i>, <i>n</i>) is generally a difficult task. The paper proposes four methods for finding lower estimates. These methods are based on cyclic codes, residual codes, the (<i>u|u + v</i>) structure, and the tensor product of codes. The methods are used together for the considered lengths to efficiently obtain lower estimates which either coincide with found values of <i>d</i>(<i>k</i>, <i>n</i>) or differ from them by one. The paper proposes a sequence of checks, which in some cases helps prove the absence of a self-complementary [<i>n</i>, <i>k</i>] code with code distance <i>d</i>. The final part of the work proposes an information hiding structure based on self-complementary codes. This structure is resistant to interference in the stegocontainer. The calculations show that the new structure is more efficient when compared with the known counterparts.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"772 - 787"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142414307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methods for Changing Parallelism in the Process of High-Level VLSI Synthesis","authors":"I. N. Ryzhenko, O. V. Nepomnyaschy, A. I. Legalov, V. V. Shaidurov","doi":"10.3103/S014641162307012X","DOIUrl":"10.3103/S014641162307012X","url":null,"abstract":"<p>In this paper, methods for increasing the efficiency of VLSI development based on the method of architecture-independent design are proposed. The route of high-level VLSI synthesis is considered. The principle of constructing a VLSI hardware model based on the functional-flow programming paradigm is stated. The results of the development of methods and algorithms for the transformation of functional-parallel programs into programs in HDL languages that support the design process of digital chips are presented. The principles of assessment are considered and the classes of resources required for the analysis of design solutions are identified. Reduction coefficients and methods of their calculation for each resource class are introduced. An algorithm for calculating the reduction coefficients and estimating the required resources is proposed. An algorithm for converting parallelism is proposed, taking into account the specified constraints of the target platform. A mechanism for the exchange of metrics with an architecture-dependent level is developed. Examples of the reduction of parallelism for the FPGA platform and practical implementation of FFT algorithms in the Virtex® UltraScale FPGA basis are given. The developed methods and algorithms make it possible to use the method of architecture-independent synthesis for transferring VLSI projects to various architectures by changing the parallelism of the circuit and equivalent transformations of parallel programs. The proposed approach provides many options for hardware solutions for implementation on various target platforms.</p>","PeriodicalId":46238,"journal":{"name":"AUTOMATIC CONTROL AND COMPUTER SCIENCES","volume":"57 7","pages":"696 - 705"},"PeriodicalIF":0.6,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140001685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}