{"title":"Top-down complementation of automata on finite trees","authors":"Laurent Doyen","doi":"10.1016/j.ipl.2024.106499","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106499","url":null,"abstract":"<div><p>We present a new complementation construction for nondeterministic automata on finite trees. The traditional complementation involves determinization of the corresponding bottom-up automaton (recall that top-down deterministic automata are less powerful than nondeterministic automata, whereas bottom-up deterministic automata are equally powerful).</p><p>The construction works directly in a top-down fashion, therefore without determinization. The main advantages of this construction are: (<em>i</em>) in the special case of finite words it boils down to the standard subset construction (which is not the case of the traditional bottom-up complementation construction), and <span><math><mo>(</mo><mi>i</mi><mi>i</mi><mo>)</mo></math></span> it illustrates the core argument of the complementation lemma for infinite trees, central in the proof of Rabin's tree theorem, in a simpler setting where issues related to acceptance conditions over infinite words and determinacy of infinite games are not present.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"187 ","pages":"Article 106499"},"PeriodicalIF":0.5,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0020019024000292/pdfft?md5=c514386574ffd52c0851a3a48a6c2db9&pid=1-s2.0-S0020019024000292-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140825394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dispersion problem on a convex polygon","authors":"Pawan K. Mishra , S.V. Rao , Gautam K. Das","doi":"10.1016/j.ipl.2024.106498","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106498","url":null,"abstract":"<div><p>Given a set <span><math><mi>P</mi><mo>=</mo><mo>{</mo><msub><mrow><mi>p</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><msub><mrow><mi>p</mi></mrow><mrow><mn>2</mn></mrow></msub><mo>,</mo><mo>…</mo><mo>,</mo><msub><mrow><mi>p</mi></mrow><mrow><mi>n</mi></mrow></msub><mo>}</mo></math></span> of <em>n</em> points in <span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> and a positive integer <em>k</em> <span><math><mo>(</mo><mo>≤</mo><mi>n</mi><mo>)</mo></math></span>, we wish to find a subset <em>S</em> of <em>P</em> of size <em>k</em> such that the cost of a subset <em>S</em>, <span><math><mi>c</mi><mi>o</mi><mi>s</mi><mi>t</mi><mo>(</mo><mi>S</mi><mo>)</mo><mo>=</mo><mi>min</mi><mo></mo><mo>{</mo><mi>d</mi><mo>(</mo><mi>p</mi><mo>,</mo><mi>q</mi><mo>)</mo><mo>|</mo><mi>p</mi><mo>,</mo><mi>q</mi><mo>∈</mo><mi>S</mi><mo>}</mo></math></span>, is maximized, where <span><math><mi>d</mi><mo>(</mo><mi>p</mi><mo>,</mo><mi>q</mi><mo>)</mo></math></span> is the Euclidean distance between two points <em>p</em> and <em>q</em>. The problem is called the <em>max-min k-dispersion problem</em>. In this article, we consider the max-min <em>k</em>-dispersion problem, where a given set <em>P</em> of <em>n</em> points are vertices of a convex polygon. We refer to this variant of the problem as the <em>convex k-dispersion</em> problem.</p><p>We propose an 1.733-factor approximation algorithm for the convex <em>k</em>-dispersion problem. In addition, we study the convex <em>k</em>-dispersion problem for <span><math><mi>k</mi><mo>=</mo><mn>4</mn></math></span>. We propose an iterative algorithm that returns an optimal solution of size 4 in <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>)</mo></math></span> time.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"187 ","pages":"Article 106498"},"PeriodicalIF":0.5,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140843117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regular D-length: A tool for improved prefix-stable forward Ramsey factorisations","authors":"Théodore Lopez, Benjamin Monmege, Jean-Marc Talbot","doi":"10.1016/j.ipl.2024.106497","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106497","url":null,"abstract":"<div><p>Recently, Jecker has introduced and studied the regular <span><math><mi>D</mi></math></span>-length of a monoid, as the length of its longest chain of regular <span><math><mi>D</mi></math></span>-classes. We use this parameter in order to improve the construction, originally proposed by Colcombet, of a deterministic automaton that allows to map a word to one of its forward Ramsey splits: these are a relaxation of factorisation forests that enjoy prefix stability, thus allowing a compositional construction. For certain monoids that have a small regular <span><math><mi>D</mi></math></span>-length, our construction produces an exponentially more succinct deterministic automaton. Finally, we apply it to obtain better complexity result for the problem of fast infix evaluation.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"187 ","pages":"Article 106497"},"PeriodicalIF":0.5,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140647631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correcting matrix products over the ring of integers","authors":"Yu-Lun Wu, Hung-Lung Wang","doi":"10.1016/j.ipl.2024.106496","DOIUrl":"10.1016/j.ipl.2024.106496","url":null,"abstract":"<div><p>Let <em>A</em>, <em>B</em>, and <em>C</em> be three <span><math><mi>n</mi><mo>×</mo><mi>n</mi></math></span> matrices. We investigate the problem of verifying whether <span><math><mi>A</mi><mi>B</mi><mo>=</mo><mi>C</mi></math></span> over the ring of integers and finding the correct product <em>AB</em>. Given that <em>C</em> is different from <em>AB</em> by at most <em>k</em> entries, we propose an algorithm that uses <span><math><mi>O</mi><mo>(</mo><msqrt><mrow><mi>k</mi></mrow></msqrt><msup><mrow><mi>n</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>+</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>2</mn></mrow></msup><mi>n</mi><mo>)</mo></math></span> operations. Let <em>α</em> be the largest absolute value of an entry in <em>A</em>, <em>B</em>, and <em>C</em>. The integers involved in the computation are of <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mn>3</mn></mrow></msup><msup><mrow><mi>α</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></math></span>.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106496"},"PeriodicalIF":0.5,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140616211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A linear-time algorithm for the center problem in weighted cycle graphs","authors":"Taekang Eom , Hee-Kap Ahn","doi":"10.1016/j.ipl.2024.106495","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106495","url":null,"abstract":"<div><p>We study the problem of computing the center of cycle graphs whose vertices are weighted. The distance from a vertex to a point of the graph is defined as the weight of the vertex times the length of the shortest path between the vertex and the point. The weighted center of the graph is a point of the graph such that the maximum distance of the vertices of the graph to the point is minimum among all points of the graph. We present an <span><math><mi>O</mi><mo>(</mo><mi>n</mi><mo>)</mo></math></span>-time algorithm for the discrete and continuous weighted center problem on cycle graphs with <em>n</em> vertices. Our algorithm improves upon the best known algorithm that takes <span><math><mi>O</mi><mo>(</mo><mi>n</mi><mi>log</mi><mo></mo><mi>n</mi><mo>)</mo></math></span> time. Moreover, it is optimal for the weighted center problem of cycle graphs.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106495"},"PeriodicalIF":0.5,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140540027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The autocorrelation of a class of quaternary sequences of length pq with high complexity","authors":"Feifei Yan , Pinhui Ke , Zuling Chang","doi":"10.1016/j.ipl.2024.106494","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106494","url":null,"abstract":"<div><p>Recently, a class of quaternary sequences with period <em>pq</em>, where <em>p</em> and <em>q</em> are two distinct odd primes introduced by Zhang et al. were proved to possess high linear complexity and 4-adic complexity. In this paper, we determine the autocorrelation distribution of this class of quaternary sequence. Our results indicate that the studied quaternary sequence are weak with respect to the correlation property.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106494"},"PeriodicalIF":0.5,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140321256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Branching bisimulation semantics for quantum processes","authors":"Hao Wu , Qizhe Yang , Huan Long","doi":"10.1016/j.ipl.2024.106492","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106492","url":null,"abstract":"<div><p>The qCCS model proposed by Feng et al. provides a powerful framework to describe and reason about quantum communication systems that could be entangled with the environment. However, they only studied weak bisimulation semantics. In this paper we propose a new branching bisimilarity for qCCS and show that it is a congruence. The new bisimilarity is based on the concept of <em>ϵ</em>-tree and preserves the branching structure of concurrent processes where both quantum and classical components are allowed. Furthermore, we present a polynomial time equivalence checking algorithm for the ground version of our branching bisimilarity.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106492"},"PeriodicalIF":0.5,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140163830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smaller kernels for two vertex deletion problems","authors":"Dekel Tsur","doi":"10.1016/j.ipl.2024.106493","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106493","url":null,"abstract":"<div><p>In this paper we consider two vertex deletion problems. In the <span>Block Vertex Deletion</span> problem, the input is a graph <em>G</em> and an integer <em>k</em>, and the goal is to decide whether there is a set of at most <em>k</em> vertices whose removal from <em>G</em> result in a block graph (a graph in which every biconnected component is a clique). In the <span>Pathwidth One Vertex Deletion</span> problem, the input is a graph <em>G</em> and an integer <em>k</em>, and the goal is to decide whether there is a set of at most <em>k</em> vertices whose removal from <em>G</em> result in a graph with pathwidth at most one. We give a kernel for <span>Block Vertex Deletion</span> with <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>)</mo></math></span> vertices and a kernel for <span>Pathwidth One Vertex Deletion</span> with <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></math></span> vertices. Our results improve the previous <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>4</mn></mrow></msup><mo>)</mo></math></span>-vertex kernel for <span>Block Vertex Deletion</span> (Agrawal et al., 2016 <span>[1]</span>) and the <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>)</mo></math></span>-vertex kernel for <span>Pathwidth One Vertex Deletion</span> (Cygan et al., 2012 <span>[3]</span>).</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106493"},"PeriodicalIF":0.5,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140160714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Long directed detours: Reduction to 2-Disjoint Paths","authors":"Ashwin Jacob, Michał Włodarczyk, Meirav Zehavi","doi":"10.1016/j.ipl.2024.106491","DOIUrl":"10.1016/j.ipl.2024.106491","url":null,"abstract":"<div><p>In the <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> problem, we look for an <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span>-path that is at least <em>k</em> vertices longer than a shortest one. We study the parameterized complexity of <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> when parameterized by <em>k</em>: this falls into the research paradigm of ‘parameterization above guarantee’. Whereas the problem is known to be fixed-parameter tractable (FPT) on undirected graphs, the status of <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> on directed graphs remains highly unclear: it is not even known to be solvable in polynomial time for <span><math><mi>k</mi><mo>=</mo><mn>1</mn></math></span>. Recently, Fomin et al. made progress in this direction by showing that the problem is FPT on every class of directed graphs where the <span>3-Disjoint Paths</span> problem is solvable in polynomial time. We improve upon their result by weakening this assumption: we show that only a polynomial-time algorithm for <span>2-Disjoint Paths</span> is required.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106491"},"PeriodicalIF":0.5,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140153962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sparsifying Count Sketch","authors":"Bhisham Dev Verma , Rameshwar Pratap , Punit Pankaj Dubey","doi":"10.1016/j.ipl.2024.106490","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106490","url":null,"abstract":"<div><p>The seminal work of Charikar et al. <span>[1]</span> called <span>Count-Sketch</span> suggests a sketching algorithm for real-valued vectors that has been used in frequency estimation for data streams and pairwise inner product estimation for real-valued vectors etc. One of the major advantages of <span>Count-Sketch</span> over other similar sketching algorithms, such as random projection, is that its running time, as well as the sparsity of sketch, depends on the sparsity of the input. Therefore, sparse datasets enjoy space-efficient (sparse sketches) and faster running time. However, on dense datasets, these advantages of <span>Count-Sketch</span> might be negligible over other baselines. In this work, we address this challenge by suggesting a simple and effective approach that outputs (asymptotically) a sparser sketch than that obtained via <span>Count-Sketch</span>, and as a by-product, we also achieve a faster running time. Simultaneously, the quality of our estimate is closely approximate to that of <span>Count-Sketch</span>. For frequency estimation and pairwise inner product estimation problems, our proposal <span>Sparse-Count-Sketch</span> provides unbiased estimates. These estimations, however, have slightly higher variances than their respective estimates obtained via <span>Count-Sketch</span>. To address this issue, we present improved estimators for these problems based on maximum likelihood estimation (MLE) that offer smaller variances even <em>w.r.t.</em> <span>Count-Sketch</span>. We suggest a rigorous theoretical analysis of our proposal for frequency estimation for data streams and pairwise inner product estimation for real-valued vectors.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106490"},"PeriodicalIF":0.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140042110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}