Information Technology Convergence and Services — Latest Publications

Communication Complexity of Inner Product in Symmetric Normed Spaces
Information Technology Convergence and Services Pub Date : 2022-11-24 DOI: 10.48550/arXiv.2211.13473
Alexandr Andoni, Jarosław Błasiok, Arnold Filtser
Abstract: We introduce and study the communication complexity of computing the inner product of two vectors, where the input is restricted w.r.t. a norm $N$ on the space $\mathbb{R}^n$. Here, Alice and Bob hold two vectors $v,u$ such that $\|v\|_N \le 1$ and $\|u\|_{N^*} \le 1$, where $N^*$ is the dual norm. They want to compute their inner product $\langle v,u \rangle$ up to an $\varepsilon$ additive term. The problem is denoted by $\mathrm{IP}_N$. We systematically study $\mathrm{IP}_N$, showing the following results:
- For any symmetric norm $N$, given $\|v\|_N \le 1$ and $\|u\|_{N^*} \le 1$, there is a randomized protocol for $\mathrm{IP}_N$ using $\tilde{\mathcal{O}}(\varepsilon^{-6} \log n)$ bits; we will denote this by $\mathcal{R}_{\varepsilon,1/3}(\mathrm{IP}_N) \le \tilde{\mathcal{O}}(\varepsilon^{-6} \log n)$.
- The one-way communication complexity satisfies $\overrightarrow{\mathcal{R}}(\mathrm{IP}_{\ell_p}) \le \mathcal{O}(\varepsilon^{-\max(2,p)} \cdot \log\frac{n}{\varepsilon})$, with a nearly matching lower bound $\overrightarrow{\mathcal{R}}(\mathrm{IP}_{\ell_p}) \ge \Omega(\varepsilon^{-\max(2,p)})$ for $\varepsilon^{-\max(2,p)} \ll n$.
- The one-way communication complexity $\overrightarrow{\mathcal{R}}(N)$ for a symmetric norm $N$ is governed by embeddings of $\ell_\infty^k$ into $N$. Specifically, while a small-distortion embedding easily implies a lower bound of $\Omega(k)$, we show that, conversely, non-existence of such an embedding implies a protocol with communication $k^{\mathcal{O}(\log\log k)} \log^2 n$.
- For an arbitrary origin-symmetric convex polytope $P$, we show $\mathcal{R}(\mathrm{IP}_N) \le \mathcal{O}(\varepsilon^{-2} \log \mathrm{xc}(P))$, where $N$ is the unique norm for which $P$ is a unit ball, and $\mathrm{xc}(P)$ is the extension complexity of $P$.
Citations: 1
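The results above concern two-party protocols for estimating $\langle v,u \rangle$ to additive error $\varepsilon$ under norm constraints. As a toy illustration of the communication model only (the standard shared-randomness sketch for the special case $N = \ell_2$, not the paper's protocol for general symmetric norms; the function name and constants are our own assumptions), the following Python sketch estimates the inner product from a common random projection that Alice would transmit:

```python
import numpy as np

def l2_inner_product_estimate(v, u, eps, rng):
    """Toy sketch for the l2 special case (not the paper's protocol):
    estimate <v, u> for ||v||_2 <= 1, ||u||_2 <= 1 to additive error ~eps.
    Alice and Bob share the random matrix G via public randomness; Alice
    sends her k = O(1/eps^2) projected coordinates (quantization to finitely
    many bits is ignored in this sketch), and Bob forms the estimate locally.
    """
    n = len(v)
    k = int(np.ceil(4 / eps**2))               # projection dimension (illustrative constant)
    G = rng.normal(size=(k, n)) / np.sqrt(k)   # shared Gaussian sketching matrix
    return float((G @ v) @ (G @ u))            # unbiased estimator of <v, u>

rng = np.random.default_rng(0)
n = 1000
v = rng.normal(size=n); v /= np.linalg.norm(v)
u = rng.normal(size=n); u /= np.linalg.norm(u)
print(abs(l2_inner_product_estimate(v, u, eps=0.1, rng=rng) - v @ u))  # typically ~0.1 or less
```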
Worst-Case to Expander-Case Reductions
Information Technology Convergence and Services Pub Date : 2022-11-23 DOI: 10.48550/arXiv.2211.12833
Amir Abboud, Nathan Wallheimer
Abstract: In recent years, the expander decomposition method was used to develop many graph algorithms, resulting in major improvements to longstanding complexity barriers. This powerful hammer has led the community to (1) believe that most problems are as easy on worst-case graphs as they are on expanders, and (2) suspect that expander decompositions are the key to breaking the remaining longstanding barriers in fine-grained complexity. We set out to investigate the extent to which these two things are true (and for which problems). Towards this end, we put forth the concept of worst-case to expander-case self-reductions. We design a collection of such reductions for fundamental graph problems, verifying belief (1) for them. The list includes $k$-Clique, $4$-Cycle, Maximum Cardinality Matching, Vertex Cover, and Minimum Dominating Set. Interestingly, for most (but not all) of these problems the proof is via a simple gadget reduction, not via expander decompositions, showing that this hammer is effectively useless against the problem and contradicting (2).
Citations: 2
An Algorithmic Bridge Between Hamming and Levenshtein Distances
Information Technology Convergence and Services Pub Date : 2022-11-22 DOI: 10.48550/arXiv.2211.12496
Elazar Goldenberg, T. Kociumaka, Robert Krauthgamer, B. Saha
Abstract: The edit distance between strings classically assigns unit cost to every character insertion, deletion, and substitution, whereas the Hamming distance only allows substitutions. In many real-life scenarios, insertions and deletions (abbreviated indels) appear frequently but significantly less so than substitutions. To model this, we consider substitutions being cheaper than indels, with cost $1/a$ for a parameter $a \ge 1$. This basic variant, denoted $\mathrm{ED}_a$, bridges classical edit distance ($a = 1$) with Hamming distance ($a \to \infty$), leading to interesting algorithmic challenges: Does the time complexity of computing $\mathrm{ED}_a$ interpolate between that of Hamming distance (linear time) and edit distance (quadratic time)? What about approximating $\mathrm{ED}_a$? We first present a simple deterministic exact algorithm for $\mathrm{ED}_a$ and further prove that it is near-optimal assuming the Orthogonal Vectors Conjecture. Our main result is a randomized algorithm computing a $(1+\epsilon)$-approximation of $\mathrm{ED}_a(X,Y)$, given strings $X,Y$ of total length $n$ and a bound $k \ge \mathrm{ED}_a(X,Y)$. For simplicity, let us focus on $k \ge 1$ and a constant $\epsilon > 0$; then, our algorithm takes $\tilde{O}(n/a + ak^3)$ time. Unless $a = \tilde{O}(1)$, and for small enough $k$, this running time is sublinear in $n$. We also consider a very natural version that asks to find a $(k_I, k_S)$-alignment, i.e., an alignment with at most $k_I$ indels and $k_S$ substitutions. In this setting, we give an exact algorithm and, more importantly, an $\tilde{O}(nk_I/k_S + k_S \cdot k_I^3)$-time $(1,1+\epsilon)$-bicriteria approximation algorithm. The latter solution is based on the techniques we develop for $\mathrm{ED}_a$ with $a = \Theta(k_S/k_I)$. These bounds are in stark contrast to unit-cost edit distance, where state-of-the-art algorithms are far from achieving a $(1+\epsilon)$-approximation in sublinear time, even for a favorable choice of $k$.
Citations: 1
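As a concrete, hedged illustration of the distance measure itself (the textbook quadratic-time dynamic program, not the paper's near-optimal exact algorithm or its sublinear approximation), the sketch below computes $\mathrm{ED}_a$ with unit-cost indels and substitutions of cost $1/a$; the function name is our own:

```python
def ed_a(x: str, y: str, a: float) -> float:
    """Textbook O(|x|*|y|) dynamic program for ED_a:
    insertions/deletions cost 1, substitutions cost 1/a (for a >= 1).
    a = 1 recovers classical (Levenshtein) edit distance; larger a makes
    substitutions progressively cheaper relative to indels.
    """
    sub = 1.0 / a
    m, n = len(x), len(y)
    dp = [float(j) for j in range(n + 1)]   # dp[j] = ED_a(x[:i], y[:j]) for the current row i
    for i in range(1, m + 1):
        prev_diag = dp[0]                   # ED_a(x[:i-1], y[:0])
        dp[0] = float(i)                    # delete all of x[:i]
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0.0 if x[i - 1] == y[j - 1] else sub
            dp[j] = min(dp[j] + 1.0,        # delete x[i-1]
                        dp[j - 1] + 1.0,    # insert y[j-1]
                        prev_diag + cost)   # match or substitute
            prev_diag = cur
    return dp[n]

print(ed_a("kitten", "sitting", a=1.0))  # 3.0, the classical edit distance
print(ed_a("kitten", "sitting", a=2.0))  # 2.0: two substitutions at cost 1/2 plus one insertion
```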
Is this correct? Let's check!
Information Technology Convergence and Services Pub Date : 2022-11-22 DOI: 10.48550/arXiv.2211.12301
Omri Ben-Eliezer, Dan Mikulincer, Elchanan Mossel, M. Sudan
Abstract: Societal accumulation of knowledge is a complex process. The correctness of new units of knowledge depends not only on the correctness of new reasoning, but also on the correctness of old units that the new one builds on. The errors in such accumulation processes are often remedied by error correction and detection heuristics. Motivating examples include the scientific process based on scientific publications, and software development based on libraries of code. Natural processes that aim to keep errors under control, such as peer review in scientific publications, and testing and debugging in software development, would typically check existing pieces of knowledge, both for the reasoning that generated them and the previous facts they rely on. In this work, we present a simple process that models such accumulation of knowledge and study the persistence (or lack thereof) of errors. We consider a simple probabilistic model for the generation of new units of knowledge based on the preferential attachment growth model, which additionally allows for errors. Furthermore, the process includes checks aimed at catching these errors. We investigate when the effects of errors persist forever in the system (with positive probability) and when they get rooted out completely by the checking process. The two basic parameters associated with the checking process are the probability of conducting a check and the depth of the check. We show that errors are rooted out if checks are sufficiently frequent and sufficiently deep. In contrast, shallow or infrequent checks are insufficient to root out errors.
Citations: 1
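To convey the flavor of such a process, the toy simulation below grows a preferential-attachment tree of "units", each independently introducing a reasoning error with some probability, and applies probabilistic checks of a fixed depth. The attachment rule, check rule, and parameter names are our own simplified assumptions, not the paper's exact model:

```python
import random

def simulate(n_units=20000, p_err=0.02, p_check=0.3, depth=3, seed=0):
    """Toy simulation (our simplified instantiation, not the paper's model).
    Units arrive one by one and build on an earlier unit chosen by
    degree-proportional (preferential) attachment.  A unit's own reasoning is
    wrong with probability p_err; a unit is 'corrupted' if its reasoning is
    wrong or the unit it builds on is corrupted.  With probability p_check the
    new unit triggers a check that inspects its own reasoning and that of its
    last `depth` ancestors; if a wrong step is found, the new unit is discarded.
    Returns the fraction of accepted units that end up corrupted.
    """
    rng = random.Random(seed)
    parents, own_error, corrupted = [None], [False], [False]   # unit 0 is an axiom
    attach_pool = [0]                      # multiset for degree-proportional choice
    while len(parents) < n_units:
        parent = rng.choice(attach_pool)
        err = rng.random() < p_err
        if rng.random() < p_check:
            node, found = parent, err      # walk up `depth` ancestors
            for _ in range(depth):
                if node is None:
                    break
                found = found or own_error[node]
                node = parents[node]
            if found:
                continue                   # check caught an error: discard this unit
        idx = len(parents)
        parents.append(parent); own_error.append(err)
        corrupted.append(err or corrupted[parent])
        attach_pool.extend([idx, parent])
    return sum(corrupted) / len(corrupted)

print(simulate(p_check=0.0))   # no checks: corruption accumulates
print(simulate(p_check=0.5))   # frequent depth-3 checks keep the corrupted fraction lower
```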
Exponential separations using guarded extension variables
Information Technology Convergence and Services Pub Date : 2022-11-22 DOI: 10.48550/arXiv.2211.12456
Emre Yolcu, Marijn J. H. Heule
Abstract: We study the complexity of proof systems augmenting resolution with inference rules that allow, given a formula $\Gamma$ in conjunctive normal form, deriving clauses that are not necessarily logically implied by $\Gamma$ but whose addition to $\Gamma$ preserves satisfiability. When the derived clauses are allowed to introduce variables not occurring in $\Gamma$, the systems we consider become equivalent to extended resolution. We are concerned with the versions of these systems without new variables. They are called BC$^-$, RAT$^-$, SBC$^-$, and GER$^-$, denoting respectively blocked clauses, resolution asymmetric tautologies, set-blocked clauses, and generalized extended resolution. Each of these systems formalizes some restricted version of the ability to make assumptions that hold "without loss of generality," which is commonly used informally to simplify or shorten proofs. Except for SBC$^-$, these systems are known to be exponentially weaker than extended resolution. They are, however, all equivalent to it under a relaxed notion of simulation that allows the translation of the formula along with the proof when moving between proof systems. By taking advantage of this fact, we construct formulas that separate RAT$^-$ from GER$^-$ and vice versa. With the same strategy, we also separate SBC$^-$ from RAT$^-$. Additionally, we give polynomial-size SBC$^-$ proofs of the pigeonhole principle, which separates SBC$^-$ from GER$^-$ by a previously known lower bound. These results also separate the three systems from BC$^-$, since they all simulate it. We thus give an almost complete picture of their relative strengths.
Citations: 0
Quantum algorithms and the power of forgetting
Information Technology Convergence and Services Pub Date : 2022-11-22 DOI: 10.4230/LIPIcs.ITCS.2023.37
Andrew M. Childs, Matthew Coudron, Amin Shiraz Gilani
Abstract: The so-called welded tree problem provides an example of a black-box problem that can be solved exponentially faster by a quantum walk than by any classical algorithm. Given the name of a special ENTRANCE vertex, a quantum walk can find another distinguished EXIT vertex using polynomially many queries, though without finding any particular path from ENTRANCE to EXIT. It has been an open problem for twenty years whether there is an efficient quantum algorithm for finding such a path, or if the path-finding problem is hard even for quantum computers. We show that a natural class of efficient quantum algorithms provably cannot find a path from ENTRANCE to EXIT. Specifically, we consider algorithms that, within each branch of their superposition, always store a set of vertex labels that form a connected subgraph including the ENTRANCE, and that only provide these vertex labels as inputs to the oracle. While this does not rule out the possibility of a quantum algorithm that efficiently finds a path, it is unclear how an algorithm could benefit by deviating from this behavior. Our no-go result suggests that, for some problems, quantum algorithms must necessarily forget the path they take to reach a solution in order to outperform classical computation.
Citations: 6
Quantum Majority Vote
Information Technology Convergence and Services Pub Date : 2022-11-21 DOI: 10.4230/LIPIcs.ITCS.2023.29
H. Buhrman, N. Linden, L. Mančinska, A. Montanaro, M. Ozols
Abstract: Majority vote is a basic method for amplifying correct outcomes that is widely used in computer science and beyond. While it can amplify the correctness of a quantum device with classical output, the analogous procedure for quantum output is not known. We introduce quantum majority vote as the following task: given a product state $|\psi_1\rangle \otimes \dots \otimes |\psi_n\rangle$ where each qubit is in one of two orthogonal states $|\psi\rangle$ or $|\psi^\perp\rangle$, output the majority state. We show that an optimal algorithm for this problem achieves worst-case fidelity of $1/2 + \Theta(1/\sqrt{n})$. Under the promise that at least $2/3$ of the input qubits are in the majority state, the fidelity increases to $1 - \Theta(1/n)$ and approaches $1$ as $n$ increases. We also consider the more general problem of computing any symmetric and equivariant Boolean function $f: \{0,1\}^n \to \{0,1\}$ in an unknown quantum basis, and show that a generalization of our quantum majority vote algorithm is optimal for this task. The optimal parameters for the generalized algorithm and its worst-case fidelity can be determined by a simple linear program of size $O(n)$. The time complexity of the algorithm is $O(n^4 \log n)$, where $n$ is the number of input qubits.
Citations: 5
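For contrast, the classical amplification that the abstract takes as its starting point (majority vote over a device with classical output) is easy to quantify exactly. The sketch below is our own baseline illustration of that classical case, not of the paper's quantum algorithm:

```python
from math import comb

def classical_majority_success(p: float, n: int) -> float:
    """Probability that the majority of n independent runs is correct when each
    run is correct independently with probability p.  Assumes odd n (no ties).
    This is the classical amplification the abstract contrasts with the setting
    of quantum outputs, where no analogous procedure was known.
    """
    assert n % 2 == 1 and 0.0 <= p <= 1.0
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

for n in (1, 11, 101):
    print(n, round(classical_majority_success(0.6, n), 4))
# success probability climbs toward 1 as n grows (0.6, ~0.75, ~0.98)
```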
Private Counting of Distinct and k-Occurring Items in Time Windows
Information Technology Convergence and Services Pub Date : 2022-11-21 DOI: 10.48550/arXiv.2211.11718
Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Jelani Nelson
Abstract: In this work, we study the task of estimating the numbers of distinct and $k$-occurring items in a time window under the constraint of differential privacy (DP). We consider several variants depending on whether the queries are on general time windows (between times $t_1$ and $t_2$), or are restricted to being cumulative (between times $1$ and $t_2$), and depending on whether the DP neighboring relation is event-level or the more stringent item-level. We obtain nearly tight upper and lower bounds on the errors of DP algorithms for these problems. En route, we obtain an event-level DP algorithm for estimating, at each time step, the number of distinct items seen over the last $W$ updates with error polylogarithmic in $W$; this answers an open question of Bolot et al. (ICDT 2013).
Citations: 5
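As a minimal sketch of the problem setup only, and not of the paper's polylogarithmic-error algorithms, the snippet below answers a single fixed-window query under event-level $\varepsilon$-DP with the Laplace mechanism: adding or removing one event changes the distinct count by at most one, so noise of scale $1/\varepsilon$ suffices. The function name and data layout are our own:

```python
import numpy as np

def dp_distinct_in_window(events, t1, t2, eps, rng):
    """Event-level eps-DP estimate of the number of distinct items among events
    with timestamps in [t1, t2], for a single query.  Changing one event changes
    the true distinct count by at most 1 (sensitivity 1), so Laplace noise of
    scale 1/eps gives eps-DP.  `events` is a list of (timestamp, item) pairs.
    The paper's algorithms handle many windows / all time steps with much better
    accumulated error than repeating this one-shot baseline.
    """
    true_count = len({item for t, item in events if t1 <= t <= t2})
    return true_count + rng.laplace(scale=1.0 / eps)

rng = np.random.default_rng(0)
events = [(1, "a"), (2, "b"), (3, "a"), (5, "c"), (7, "b"), (9, "d")]
print(dp_distinct_in_window(events, t1=2, t2=7, eps=1.0, rng=rng))  # true value is 3, plus noise
```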
Beyond Worst-Case Budget-Feasible Mechanism Design
Information Technology Convergence and Services Pub Date : 2022-11-16 DOI: 10.48550/arXiv.2211.08711
A. Rubinstein, Junyao Zhao
Abstract: Motivated by large-market applications such as crowdsourcing, we revisit the problem of budget-feasible mechanism design under a "small-bidder assumption". Anari, Goel, and Nikzad (2018) gave a mechanism that has optimal competitive ratio $1-1/e$ on worst-case instances. However, we observe that on many realistic instances, their mechanism is significantly outperformed by a simpler open clock auction by Ensthaler and Giebe (2014), although the open clock auction only achieves competitive ratio $1/2$ in the worst case. Is there a mechanism that gets the best of both worlds, i.e., a mechanism that is worst-case optimal and performs favorably on realistic instances? Our first main result is the design and the analysis of a natural mechanism that gives an affirmative answer to our question above: (i) We prove that on every instance, our mechanism performs at least as well as all uniform mechanisms, including Anari, Goel, and Nikzad's and Ensthaler and Giebe's mechanisms. (ii) Moreover, we empirically evaluate our mechanism on various realistic instances and observe that it beats the worst-case $1-1/e$ competitive ratio by a large margin and compares favorably to both mechanisms mentioned above. Our second main result is more interesting in theory: we show that in the semi-adversarial model of budget-smoothed analysis, where the adversary designs a single worst-case market for a distribution of budgets, our mechanism is optimal among all (including non-uniform) mechanisms; furthermore, our mechanism guarantees a strictly better-than-$(1-1/e)$ expected competitive ratio for any non-trivial budget distribution, regardless of the market. We complement the positive result with a characterization of the worst-case markets for any given budget distribution and prove a fairly robust hardness result that holds against any budget distribution and any mechanism.
Citations: 1
Improved Monotonicity Testers via Hypercube Embeddings
Information Technology Convergence and Services Pub Date : 2022-11-16 DOI: 10.48550/arXiv.2211.09229
M. Braverman, Subhash Khot, Guy Kindler, Dor Minzer
Abstract: We show improved monotonicity testers for the Boolean hypercube under the $p$-biased measure, as well as over the hypergrid $[m]^n$. Our results are:
1. For any $p \in (0,1)$, for the $p$-biased hypercube we show a non-adaptive tester that makes $\tilde{O}(\sqrt{n}/\varepsilon^2)$ queries, accepts monotone functions with probability $1$, and rejects functions that are $\varepsilon$-far from monotone with probability at least $2/3$.
2. For all $m \in \mathbb{N}$, we show an $\tilde{O}(\sqrt{n}\,m^3/\varepsilon^2)$-query monotonicity tester over $[m]^n$.
We also establish corresponding directed isoperimetric inequalities in these domains. Previously, the best known tester, due to Black, Chakrabarty, and Seshadhri, had $\Omega(n^{5/6})$ query complexity. Our results are optimal up to poly-logarithmic factors and the dependency on $m$. Our proof uses a notion of monotone embeddings of measures into the Boolean hypercube that can be used to reduce the problem of monotonicity testing over arbitrary product domains to the Boolean cube. The embedding maps a function over a product domain of dimension $n$ into a function over a Boolean cube of a larger dimension $n'$, while preserving its distance from being monotone; an embedding is considered efficient if $n'$ is not much larger than $n$, and we show how to construct efficient embeddings in the above-mentioned settings.
Citations: 8
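To make the testing model concrete, here is a minimal sketch of the classical edge tester over the uniform Boolean hypercube, given only as background: it is the simple baseline, not the paper's $\tilde{O}(\sqrt{n}/\varepsilon^2)$-query tester or its $p$-biased and hypergrid variants. The function names and test functions are our own:

```python
import random

def edge_monotonicity_tester(f, n, queries, rng):
    """Classical edge tester on {0,1}^n under the uniform measure (baseline only,
    not the paper's tester).  Each query picks a random coordinate i and a random
    point x with x[i] = 0, and rejects if f decreases along the edge (x, x + e_i).
    Monotone functions are always accepted; functions far from monotone are
    rejected with probability growing in the number of queries.
    """
    for _ in range(queries):
        i = rng.randrange(n)
        x = [rng.randint(0, 1) for _ in range(n)]
        x[i] = 0
        y = x.copy(); y[i] = 1
        if f(tuple(x)) > f(tuple(y)):
            return False        # violated edge found: reject
    return True                 # accept

rng = random.Random(0)
n = 8
threshold = lambda x: int(sum(x) >= n // 2)   # monotone threshold function
anti = lambda x: 1 - x[0]                     # decreasing in coordinate 0
print(edge_monotonicity_tester(threshold, n, 200, rng))  # True: monotone f is always accepted
print(edge_monotonicity_tester(anti, n, 200, rng))       # False with overwhelming probability
```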