{"title":"The Spanning Trees Formulas in a Class of Double Fixed-Step Loop Networks","authors":"T. Atajan, N. Otsuka, Xuerong Yong","doi":"10.1137/1.9781611972993.3","DOIUrl":"https://doi.org/10.1137/1.9781611972993.3","url":null,"abstract":"A double fixed-step loop network, Cp,q, is a digraph on n vertices 0, 1, 2, ..., n − 1 and for each vertex i (0 < i ≤ n − 1), there are exactly two arcs leaving from vertex i to vertices i + p, i + q (mod n). In this paper, we first derive an expression formula of elementary symmetric polynomials as polynomials in sums of powers then, by using this, for any positive integers p, q, n with p < q < n, an explicit formula for counting the number of spanning trees in a class of double fixed-step loop networks with constant or nonconstant jumps. We allso find two classes of networks that share the same number of spanning trees and we, finally, prove that the number of spanning trees can be approximated by a formula which is based on the mth order Fibonacci numbers. In some special cases, our results generate the formulas obtained in [15],[19],[20]. And, compared with the previous work, the advantage is that, for any jumps p, q, the number of spanning trees can be calculated directly, without establishing the recurrence relation of order 2q−1.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117319371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mathematics and Computer Science Serving/Impacting Bioinformatics","authors":"G. Gonnet","doi":"10.1137/1.9781611972993.5","DOIUrl":"https://doi.org/10.1137/1.9781611972993.5","url":null,"abstract":"Since the early days of Bioinformatics, it was clear that mathematics in general, and computer science in particular, have a lot to contribute to bioinformatics. Bioinformatics has made substantial progress in the last 20 years, using tools from computer science, mathematics and statistics. \u0000 \u0000As it will be seen there is plenty of work for the algorithms community.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132282049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maximum Likelihood Analysis of Heapsort","authors":"Ulrich Laube, M. Nebel","doi":"10.1137/1.9781611972993.7","DOIUrl":"https://doi.org/10.1137/1.9781611972993.7","url":null,"abstract":"We present a new approach for an average-cases analysis of algorithms that supports a non-uniform distribution of the inputs and is based on the maximum likelihood training of stochastic grammars. The approach is exemplified by an analysis of the average running time of heapsort. All but one step of our analysis can be automated on top of a computer-algebra system. Thus our new approach eases the effort required for an average-case analysis exceptionally allowing for the consideration of realistic input distributions with unknown distribution functions at the same time.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"1007 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120876874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Average-case Analysis of Moves in Quick Select","authors":"H. Mahmoud","doi":"10.1137/1.9781611972993.6","DOIUrl":"https://doi.org/10.1137/1.9781611972993.6","url":null,"abstract":"We investigate the average number of moves made by Quick Select (a variant of Quick Sort for finding order statistics) to find an element with a randomly selected rank. This kind of grand average provides smoothing over all individual cases of a specific fixed order statistic. The variance of the number of moves involves intricate dependencies, and we only give reasonably tight bounds.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126800144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Balanced And/Or Trees and Linear Threshold Functions","authors":"Hervé Fournier, Danièle Gardy, Antoine Genitrini","doi":"10.1137/1.9781611972993.8","DOIUrl":"https://doi.org/10.1137/1.9781611972993.8","url":null,"abstract":"We consider random balanced Boolean formulas, built on the two connectives and and or, and a fixed number of variables. The probability distribution induced on Boolean functions is shown to have a limit when letting the depth of these formulas grow to infinity. By investigating how this limiting distribution depends on the two underlying probability distributions, over the connectives and over the Boolean variables, we prove that its support is made of linear threshold functions, and give the speed of convergence towards this limiting distribution.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"12 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114010584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pursuit and Evasion from a Distance: Algorithms and Bounds","authors":"A. Bonato, E. Chiniforooshan","doi":"10.1137/1.9781611972993.1","DOIUrl":"https://doi.org/10.1137/1.9781611972993.1","url":null,"abstract":"Cops and Robber is a pursuit and evasion game played on graphs that has received much attention. We consider an extension of Cops and Robber, distance k Cops and Robber, where the cops win if they are distance at most k from the robber in G. The cop number of a graph G is the minimum number of cops needed to capture the robber in G. The distance k analogue of the cop number, written ck(G), equals the minimum number of cops needed to win at a given distance k. We supply a classification result for graphs with bounded ck(G) values and develop an O(n2s+3) algorithm for determining if ck(G) ≤ s. In the case k = 0, our algorithm is faster than previously known algorithms. Upper and lower bounds are found for ck(G) in terms of the order of G. We prove that \u0000 \u0000[EQUATION] \u0000 \u0000where ck(n) is the maximum of ck(G) over all n-node connected graphs.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115511432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Approximating L1-distances Between Mixture Distributions Using Random Projections","authors":"Satyaki Mahalanabis, Daniel Stefankovic","doi":"10.1137/1.9781611972993.11","DOIUrl":"https://doi.org/10.1137/1.9781611972993.11","url":null,"abstract":"We consider the problem of computing L1-distances between every pair of probability densities from a given family, a problem motivated by density estimation [15]. We point out that the technique of Cauchy random projections [10] in this context turns into stochastic integrals with respect to Cauchy motion. \u0000 \u0000For piecewise-linear densities these integrals can be sampled from if one can sample from the stochastic integral of the function x → (1, x). We give an explicit density function for this stochastic integral and present an efficient (exact) sampling algorithm. As a consequence we obtain an efficient algorithm to approximate the L1-distances with a small relative error. \u0000 \u0000For piecewise-polynomial densities we show how to approximately sample from the distributions resulting from the stochastic integrals. This also results in an efficient algorithm to approximate the L1-distances, although our inability to get exact samples worsens the dependence on the parameters.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132720281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Markovian Embeddings of General Random Strings","authors":"M. Lladser","doi":"10.1137/1.9781611972986.2","DOIUrl":"https://doi.org/10.1137/1.9781611972986.2","url":null,"abstract":"Let A be a finite set and X a sequence of A-valued random variables. We do not assume any particular correlation structure between these random variables; in particular, X may be a non-Markovian sequence. An adapted embedding of X is a sequence of the form R(X1), R(X1,X2), R(X1,X2,X3), etc where R is a transformation defined over finite length sequences. In this extended abstract we characterize a wide class of adapted embeddings of X that result in a first-order homogeneous Markov chain. We show that any transformation R has a unique coarsest refinement R' in this class such that R'(X1), R'(X1,X2), R'(X1,X2,X3), etc is Markovian. (By refinement we mean that R'(u) = R'(v) implies R(u) = R(v), and by coarsest refinement we mean that R' is a deterministic function of any other refinement of R in our class of transformations.) We propose a specific embedding that we denote as RX which is particularly amenable for analyzing the occurrence of patterns described by regular expressions in X. A toy example of a non-Markovian sequence of 0's and 1's is analyzed thoroughly: discrete asymptotic distributions are established for the number of occurrences of a certain regular pattern in X1, ..., Xn as n → ∞ whereas a Gaussian asymptotic distribution is shown to apply for another regular pattern.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129070829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Convergence of Upper Bound Techniques for the Average Length of Longest Common Subsequences","authors":"G. S. Lueker","doi":"10.1137/1.9781611972986.1","DOIUrl":"https://doi.org/10.1137/1.9781611972986.1","url":null,"abstract":"It has long been known [2] that the average length of the longest common subsequence of two random strings of length n over an alphabet of size k is asymptotic to γkn for some constant γk depending on k. The value of these constants remains unknown, and a number of papers have proved upper and lower bounds on them. In particular, in [6] we used a modification of methods of [3, 4] for determining lower and upper bounds on γk, combined with large computer computations, to obtain improved bounds on γ2. The method of [6] involved a parameter h; empirically, increasing h increased the computation time but gave better upper bounds. Here we show, for arbitrary k, a sufficient condition for a parameterized method to produce a sequence of upper bounds approaching the true value of γk, and show that a generalization of the method of [6] meets this condition for all k ≥ 2. While [3, 4] do not explicitly discuss how to parameterize their method, which is based on a concept they call domination, to trade off the tightness of the bound vs. the amount of computation, we discuss a very natural parameterization of their method; for the case of alphabet size k = 2 we conjecture but do not prove that it also meets the sufficient condition and hence also yields a sequence of bounds that converges to the correct value of γ2. For k > 2, it does not meet our sufficient condition. Thus we leave open the question of whether some method based on the undominated collations of [3, 4] gives bounds converging to the correct value for any k ≥ 2.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129400143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nearly Tight Bounds on the Encoding Length of the Burrows-Wheeler Transform","authors":"Ankur Gupta, R. Grossi, J. Vitter","doi":"10.1137/1.9781611972986.3","DOIUrl":"https://doi.org/10.1137/1.9781611972986.3","url":null,"abstract":"In this paper, we present a nearly tight analysis of the encoding length of the Burrows-Wheeler Transform (BWT) that is motivated by the text indexing setting. For a text T of n symbols drawn from an alphabet Σ, our encoding scheme achieves bounds in terms of the hth-order empirical entropy Hh of the text, and takes linear time for encoding and decoding. We also describe a lower bound on the encoding length of the BWT that constructs an infinite (non-trivial) class of texts that are among the hardest to compress using the BWT. We then show that our upper bound encoding length is nearly tight with this lower bound for the class of texts we described. \u0000 \u0000In designing our BWT encoding and its lower bound, we also address the t-subset problem; here, the goal is to store a subset of t items drawn from a universe [1..n] using just lg (nt)+O(1) bits of space. A number of solutions to this basic problem are known, however encoding or decoding usually requires either O(t) operations on large integers [Knu05, Rus05] or O(n) operations. We provide a novel approach to reduce the encoding/decoding time to just O(t) operations on small integers (of size O(lg n) bits), without increasing the space required.","PeriodicalId":340112,"journal":{"name":"Workshop on Analytic Algorithmics and Combinatorics","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132906976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}