{"title":"New Direct Sum Tests","authors":"Alek Westover, Edward Yu, Kai Zheng","doi":"arxiv-2409.10464","DOIUrl":"https://doi.org/arxiv-2409.10464","url":null,"abstract":"A function $f:[n]^{d} to mathbb{F}_2$ is a defn{direct sum} if there are\u0000functions $L_i:[n]to mathbb{F}_2$ such that ${f(x) = sum_{i}L_i(x_i)}$. In\u0000this work we give multiple results related to the property testing of direct\u0000sums. Our first result concerns a test proposed by Dinur and Golubev in 2019. We\u0000call their test the Diamond test and show that it is indeed a direct sum\u0000tester. More specifically, we show that if a function $f$ is $epsilon$-far\u0000from being a direct sum function, then the Diamond test rejects $f$ with\u0000probability at least $Omega_{n,epsilon}(1)$. Even in the case of $n = 2$, the\u0000Diamond test is, to the best of our knowledge, novel and yields a new tester\u0000for the classic property of affinity. Apart from the Diamond test, we also analyze a broad family of direct sum\u0000tests, which at a high level, run an arbitrary affinity test on the restriction\u0000of $f$ to a random hypercube inside of $[n]^d$. This family of tests includes\u0000the direct sum test analyzed in cite{di19}, but does not include the Diamond\u0000test. As an application of our result, we obtain a direct sum test which works\u0000in the online adversary model of cite{KRV}. Finally, we also discuss a Fourier analytic interpretation of the diamond\u0000tester in the $n=2$ case, as well as prove local correction results for direct\u0000sum as conjectured by Dinur and Golubev.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142264887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Complexity and algorithms for Swap median and relation to other consensus problems","authors":"Luís Cunha, Thiago Lopes, Arnaud Mary","doi":"arxiv-2409.09734","DOIUrl":"https://doi.org/arxiv-2409.09734","url":null,"abstract":"Genome rearrangements are events in which large blocks of DNA exchange pieces\u0000during evolution. The analysis of such events is a tool for understanding\u0000evolutionary genomics, based on finding the minimum number of rearrangements to\u0000transform one genome into another. In a general scenario, more than two genomes\u0000are considered and we have new challenges. The {sc Median} problem consists in\u0000finding, given three permutations and a distance metric, a permutation $s$ that\u0000minimizes the sum of the distances between $s$ and each input. We study the\u0000{sc median} problem over emph{swap} distances in permutations, for which the\u0000computational complexity has been open for almost 20 years (Eriksen,\u0000emph{Theor. Compt. Sci.}, 2007). We consider this problem through some\u0000branches. We associate median solutions and interval convex sets, where the\u0000concept of graph convexity inspires the following investigation: Does a median\u0000permutation belong to every shortest path between one of the pairs of input\u0000permutations? We are able to partially answer this question, and as a\u0000by-product we solve a long open problem by proving that the {sc Swap Median}\u0000problem is NP-hard. Furthermore, using a similar approach, we show that the\u0000{sc Closest} problem, which seeks to minimize the maximum distance between the\u0000solution and the input permutations, is NP-hard even considering three input\u0000permutations. This gives a sharp dichotomy into the P vs. NP-hard approaches,\u0000since considering two input permutations the problem is easily solvable and\u0000considering any number of input permutations it is known to be NP-hard since\u00002007 (Popov, emph{Theor. Compt. Sci.}, 2007). In addition, we show that {sc\u0000Swap Median} and {sc Swap Closest} are APX-hard problems.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142264891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Journalists, Emotions, and the Introduction of Generative AI Chatbots: A Large-Scale Analysis of Tweets Before and After the Launch of ChatGPT","authors":"Seth C. Lewis, David M. Markowitz, Jon Benedik Bunquin","doi":"arxiv-2409.08761","DOIUrl":"https://doi.org/arxiv-2409.08761","url":null,"abstract":"As part of a broader look at the impact of generative AI, this study\u0000investigated the emotional responses of journalists to the release of ChatGPT\u0000at the time of its launch. By analyzing nearly 1 million Tweets from\u0000journalists at major U.S. news outlets, we tracked changes in emotional tone\u0000and sentiment before and after the introduction of ChatGPT in November 2022.\u0000Using various computational and natural language processing techniques to\u0000measure emotional shifts in response to ChatGPT's release, we found an increase\u0000in positive emotion and a more favorable tone post-launch, suggesting initial\u0000optimism toward AI's potential. This research underscores the pivotal role of\u0000journalists as interpreters of technological innovation and disruption,\u0000highlighting how their emotional reactions may shape public narratives around\u0000emerging technologies. The study contributes to understanding the intersection\u0000of journalism, emotion, and AI, offering insights into the broader societal\u0000impact of generative AI tools.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142264888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Almost-catalytic Computation","authors":"Sagar Bisoyi, Krishnamoorthy Dinesh, Bhabya Deep Rai, Jayalal Sarma","doi":"arxiv-2409.07208","DOIUrl":"https://doi.org/arxiv-2409.07208","url":null,"abstract":"Designing algorithms for space bounded models with restoration requirements\u0000on the space used by the algorithm is an important challenge posed about the\u0000catalytic computation model introduced by Buhrman et al. (2014). Motivated by\u0000the scenarios where we do not need to restore unless is useful, we define\u0000$ACL(A)$ to be the class of languages that can be accepted by almost-catalytic\u0000Turing machines with respect to $A$ (which we call the catalytic set), that\u0000uses at most $clog n$ work space and $n^c$ catalytic space. We show that if there are almost-catalytic algorithms for a problem with\u0000catalytic set as $A subseteq Sigma^*$ and its complement respectively, then\u0000the problem can be solved by a ZPP algorithm. Using this, we derive that to\u0000design catalytic algorithms, it suffices to design almost-catalytic algorithms\u0000where the catalytic set is the set of strings of odd weight ($PARITY$). Towards\u0000this, we consider two complexity measures of the set $A$ which are maximized\u0000for $PARITY$ - random projection complexity (${cal R}(A)$) and the subcube\u0000partition complexity (${cal P}(A)$). By making use of error-correcting codes, we show that for all $k ge 1$,\u0000there is a language $A_k subseteq Sigma^*$ such that $DSPACE(n^k) subseteq\u0000ACL(A_k)$ where for every $m ge 1$, $mathcal{R}(A_k cap {0,1}^m) ge\u0000frac{m}{4}$ and $mathcal{P}(A_k cap {0,1}^m)=2^{m/4}$. This contrasts the\u0000catalytic machine model where it is unclear if it can accept all languages in\u0000$DSPACE(log^{1+epsilon} n)$ for any $epsilon > 0$. Improving the partition complexity of the catalytic set $A$ further, we show\u0000that for all $k ge 1$, there is a $A_k subseteq {0,1}^*$ such that\u0000$mathsf{DSPACE}(log^k n) subseteq ACL(A_k)$ where for every $m ge 1$,\u0000$mathcal{R}(A_k cap {0,1}^m) ge frac{m}{4}$ and $mathcal{P}(A_k cap\u0000{0,1}^m)=2^{m/4+Omega(log m)}$.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142185868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Simulation of Cellular Automata by Self-Composition","authors":"Joseph Natal, Oleksiy Al-saadi","doi":"arxiv-2409.07065","DOIUrl":"https://doi.org/arxiv-2409.07065","url":null,"abstract":"It is shown that computing the configuration of any one-dimensional cellular\u0000automaton at generation $n$ can be accelerated by constructing and running a\u0000composite one with a radius proportional to $log n$. The new automaton is the\u0000original automaton whose local rule function is composed with itself. The\u0000asymptotic time complexity to compute the configuration of generation $n$ is\u0000reduced from $O(n^2)$ operations to $O(n^2 / log n)$ on a given machine with\u0000$O(n^2)$ memory usage. Experimental results are given in the case of Rule 30.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142185869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fully Characterizing Lossy Catalytic Computation","authors":"Marten Folkertsma, Ian Mertz, Florian Speelman, Quinten Tupker","doi":"arxiv-2409.05046","DOIUrl":"https://doi.org/arxiv-2409.05046","url":null,"abstract":"A catalytic machine is a model of computation where a traditional\u0000space-bounded machine is augmented with an additional, significantly larger,\u0000\"catalytic\" tape, which, while being available as a work tape, has the caveat\u0000of being initialized with an arbitrary string, which must be preserved at the\u0000end of the computation. Despite this restriction, catalytic machines have been\u0000shown to have surprising additional power; a logspace machine with a polynomial\u0000length catalytic tape, known as catalytic logspace ($CL$), can compute problems\u0000which are believed to be impossible for $L$. A fundamental question of the model is whether the catalytic condition, of\u0000leaving the catalytic tape in its exact original configuration, is robust to\u0000minor deviations. This study was initialized by Gupta et al. (2024), who\u0000defined lossy catalytic logspace ($LCL[e]$) as a variant of $CL$ where we allow\u0000up to $e$ errors when resetting the catalytic tape. They showed that $LCL[e] =\u0000CL$ for any $e = O(1)$, which remains the frontier of our understanding. In this work we completely characterize lossy catalytic space\u0000($LCSPACE[s,c,e]$) in terms of ordinary catalytic space ($CSPACE[s,c]$). We\u0000show that $$LCSPACE[s,c,e] = CSPACE[Theta(s + e log c), Theta(c)]$$ In other\u0000words, allowing $e$ errors on a catalytic tape of length $c$ is equivalent, up\u0000to a constant stretch, to an equivalent errorless catalytic machine with an\u0000additional $e log c$ bits of ordinary working memory. As a consequence, we show that for any $e$, $LCL[e] = CL$ implies $SPACE[e\u0000log n] subseteq ZPP$, thus giving a barrier to any improvement beyond\u0000$LCL[O(1)] = CL$. We also show equivalent results for non-deterministic and\u0000randomized catalytic space.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142185870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Quantum Pigeonhole Principle and Two Semidefinite Relaxations of Communication Complexity","authors":"Pavel Dvořák, Bruno Loff, Suhail Sherif","doi":"arxiv-2409.04592","DOIUrl":"https://doi.org/arxiv-2409.04592","url":null,"abstract":"We study semidefinite relaxations of $Pi_1$ combinatorial statements. By\u0000relaxing the pigeonhole principle, we obtain a new \"quantum\" pigeonhole\u0000principle which is a stronger statement. By relaxing statements of the form\u0000\"the communication complexity of $f$ is $> k$\", we obtain new communication\u0000models, which we call \"$gamma_2$ communication\" and \"quantum-lab protocols\".\u0000We prove, via an argument from proof complexity, that any natural model\u0000obtained by such a relaxation must solve all Karchmer--Wigderson games\u0000efficiently. However, the argument is not constructive, so we work to\u0000explicitly construct such protocols in these two models.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142185871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-Sided Lossless Expanders in the Unbalanced Setting","authors":"Eshan Chattopadhyay, Mohit Gurumukhani, Noam Ringach, Yunya Zhao","doi":"arxiv-2409.04549","DOIUrl":"https://doi.org/arxiv-2409.04549","url":null,"abstract":"We present the first explicit construction of two-sided lossless expanders in\u0000the unbalanced setting (bipartite graphs that have many more nodes on the left\u0000than on the right). Prior to our work, all known explicit constructions in the\u0000unbalanced setting achieved only one-sided lossless expansion. Specifically, we show that the one-sided lossless expanders constructed by\u0000Kalev and Ta-Shma (RANDOM'22) -- that are based on multiplicity codes\u0000introduced by Kopparty, Saraf, and Yekhanin (STOC'11) -- are, in fact,\u0000two-sided lossless expanders. Using our unbalanced bipartite expander, we easily obtain lossless\u0000(non-bipartite) expander graphs with high degree and a free group action. As\u0000far as we know, this is the first explicit construction of lossless\u0000(non-bipartite) expanders with $N$ vertices and degree $ll N$.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"225 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142185872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Query complexity lower bounds for local list-decoding and hard-core predicates (even for small rate and huge lists)","authors":"Noga Ron-Zewi, Ronen Shaltiel, Nithin Varma","doi":"arxiv-2409.01708","DOIUrl":"https://doi.org/arxiv-2409.01708","url":null,"abstract":"A binary code Enc$:{0,1}^k to {0,1}^n$ is $(0.5-epsilon,L)$-list\u0000decodable if for all $w in {0,1}^n$, the set List$(w)$ of all messages $m\u0000in {0,1}^k$ such that the relative Hamming distance between Enc$(m)$ and $w$\u0000is at most $0.5 -epsilon$, has size at most $L$. Informally, a $q$-query local\u0000list-decoder for Enc is a randomized procedure Dec$:[k]times [L] to {0,1}$\u0000that when given oracle access to a string $w$, makes at most $q$ oracle calls,\u0000and for every message $m in text{List}(w)$, with high probability, there\u0000exists $j in [L]$ such that for every $i in [k]$, with high probability,\u0000Dec$^w(i,j)=m_i$. We prove lower bounds on $q$, that apply even if $L$ is huge (say\u0000$L=2^{k^{0.9}}$) and the rate of Enc is small (meaning that $n ge 2^{k}$): 1. For $epsilon geq 1/k^{nu}$ for some universal constant $0< nu < 1$, we\u0000prove a lower bound of $q=Omega(frac{log(1/delta)}{epsilon^2})$, where\u0000$delta$ is the error probability of the local list-decoder. This bound is\u0000tight as there is a matching upper bound by Goldreich and Levin (STOC 1989) of\u0000$q=O(frac{log(1/delta)}{epsilon^2})$ for the Hadamard code (which has\u0000$n=2^k$). This bound extends an earlier work of Grinberg, Shaltiel and Viola\u0000(FOCS 2018) which only works if $n le 2^{k^{gamma}}$ for some universal\u0000constant $0<gamma <1$, and the number of coins tossed by Dec is small (and\u0000therefore does not apply to the Hadamard code, or other codes with low rate). 2. For smaller $epsilon$, we prove a lower bound of roughly $q =\u0000Omega(frac{1}{sqrt{epsilon}})$. To the best of our knowledge, this is the\u0000first lower bound on the number of queries of local list-decoders that gives $q\u0000ge k$ for small $epsilon$. We also prove black-box limitations for improving some of the parameters of\u0000the Goldreich-Levin hard-core predicate construction.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142185873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Partial and weighted matrix multiplication","authors":"Péter Vrana","doi":"arxiv-2408.15728","DOIUrl":"https://doi.org/arxiv-2408.15728","url":null,"abstract":"In a paper published in 1981, Sch\"onhage showed that large total matrix\u0000multiplications can be reduced to powers of partial matrix multiplication\u0000tensors, which correspond to the bilinear computation task of multiplying\u0000matrices with some of the entries fixed to be zero. It was left as an open\u0000problem to generalize the method to the case when the multiplication is also\u0000partial in the sense that only a subset of the entries need to be computed. We\u0000prove a variant of a more general case: reducing large weighted matrix\u0000multiplications to tensor powers of a partial matrix multiplication in the\u0000sense that every entry of the result is a partial version of the inner product\u0000of the corresponding row and column of the factors that would appear in the\u0000usual matrix product. The implication is that support rank upper bounds on\u0000partial matrix multiplication tensors in this general sense give upper bounds\u0000on the support rank exponent of matrix multiplication.","PeriodicalId":501024,"journal":{"name":"arXiv - CS - Computational Complexity","volume":"67 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142185874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}