2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton): Latest Articles

Capacity of Private Linear Computation for Coded Databases
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8636039
Sarah A. Obead, Hsuan-Yin Lin, E. Rosnes, J. Kliewer
Abstract: We consider the problem of private linear computation (PLC) in a distributed storage system. In PLC, a user wishes to compute a linear combination of f messages stored in noncolluding databases while revealing no information about the coefficients of the desired linear combination to the databases. Extending our previous work, we employ linear codes to encode the information on the databases. We show that the PLC capacity, defined as the ratio of the desired linear function size to the total amount of downloaded information, matches the maximum distance separable (MDS) coded capacity of private information retrieval for a large class of linear codes that includes MDS codes. In particular, the proposed converse is valid for any number of messages and linear combinations, and the capacity expression depends on the rank of the coefficient matrix obtained from all linear combinations.
Citations: 25
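For orientation, the MDS-coded PIR capacity that this abstract refers to is commonly stated as below for f messages stored across noncolluding databases with an [n, k] MDS code. The expression is quoted from the PIR literature as a reference point, not from the paper itself, and the remark about the rank of the coefficient matrix is an assumption based only on the abstract's wording.

```latex
% Capacity of MDS-coded PIR with f messages and an [n,k] MDS storage code
% (a reference point from the PIR literature; per the abstract, the PLC
% capacity matches this form, with the number of messages effectively
% replaced by the rank of the coefficient matrix).
C_{\text{MDS-PIR}}
  = \left( 1 + \frac{k}{n} + \Bigl(\frac{k}{n}\Bigr)^{2}
          + \cdots + \Bigl(\frac{k}{n}\Bigr)^{f-1} \right)^{-1}
  = \frac{1 - k/n}{1 - (k/n)^{f}}.
```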
Quantized Dominant Strategy Mechanisms with Constrained Marginal Valuations
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8635997
Hao Ge, R. Berry
Abstract: We address the problem of designing efficient allocation mechanisms for a divisible resource, a fundamental problem in many networked systems. One milestone in mechanism design is the well-known Vickrey-Clarke-Groves (VCG) mechanism, in which each agent has a strictly dominant strategy. However, VCG mechanisms can require an excessive amount of communication, making them impractical in some large networked systems. Alternative approaches have been studied that relax the incentive properties of VCG to limit communication. Alternatively, in prior work we considered the use of quantization as a way to reduce communication while maintaining dominant strategy incentive compatibility, albeit with a loss of efficiency. Our prior work bounded this efficiency loss for arbitrary concave increasing agent utilities. In this paper, we first refine this analysis when bounds on the marginal utility of an agent are known. In addition to quantizing the resource, we also study mechanisms that quantize the bids an agent can submit, and again bound the overall efficiency loss given constraints on the agents' marginal valuations.
Citations: 4
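As a concrete point of reference for the resource-quantization idea, the following is a minimal sketch of a VCG (Clarke pivot) mechanism in which a unit divisible resource may only be allocated in multiples of 1/Q. The agent utilities, the brute-force welfare maximization, and the parameter choices are illustrative assumptions, not the constructions or bounds analyzed in the paper.

```python
# Minimal sketch: VCG with a quantized divisible resource (illustrative only).
import itertools
import math

def vcg_quantized(utilities, Q):
    """utilities: list of reported valuation functions u_i(x), x in [0, 1].
    Q: number of quantization levels for the unit resource."""
    n = len(utilities)
    levels = [q / Q for q in range(Q + 1)]

    def best_allocation(active):
        """Welfare-maximizing quantized allocation over the 'active' agents."""
        best, best_w = None, -math.inf
        for alloc in itertools.product(levels, repeat=len(active)):
            if sum(alloc) <= 1.0 + 1e-12:
                w = sum(utilities[i](a) for i, a in zip(active, alloc))
                if w > best_w:
                    best, best_w = alloc, w
        return best, best_w

    everyone = list(range(n))
    alloc, _ = best_allocation(everyone)

    # Clarke pivot payments: the externality agent i imposes on the others.
    payments = []
    for i in everyone:
        others = [j for j in everyone if j != i]
        _, w_without_i = best_allocation(others)
        w_others_at_alloc = sum(utilities[j](alloc[k])
                                for k, j in enumerate(everyone) if j != i)
        payments.append(w_without_i - w_others_at_alloc)
    return alloc, payments

# Example: three agents with concave valuations a_i * sqrt(x).
coeffs = [1.0, 2.0, 3.0]
utils = [lambda x, a=a: a * math.sqrt(x) for a in coeffs]  # default arg pins each a
print(vcg_quantized(utils, Q=4))
```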
The Generalized Lasso for Sub-gaussian Observations with Dithered Quantization
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8636051
Christos Thrampoulidis, A. Rawat
Abstract: In the problem of structured signal recovery from high-dimensional linear observations, it is commonly assumed that full-precision measurements are available. Under this assumption, the recovery performance of the popular Generalized Lasso (G-Lasso) is by now well established. In this paper, we extend these types of results to the practically relevant setting of quantized measurements. We study two extremes of quantization schemes, namely, uniform and one-bit quantization; the former imposes no limit on the number of quantization bits, while the latter allows only one bit. In the presence of a uniform dithering signal and when the measurement vectors are sub-gaussian, we show that the same algorithm (i.e., the G-Lasso) has favorable recovery guarantees for both uniform and one-bit quantization schemes. Our theoretical results shed light on the appropriate choice of the range of values of the dithering signal and accurately capture the error dependence on the problem parameters. For example, our error analysis shows that the G-Lasso with one-bit uniformly dithered measurements incurs only a logarithmic rate loss compared to full-precision measurements.
Citations: 3
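The following is a minimal numerical sketch of the setting described above: a sparse signal is recovered from one-bit, uniformly dithered measurements by running a plain l1-regularized least squares (solved here with ISTA) on the scaled quantized observations. The problem sizes, the dither range, the regularization constant, and the choice of ISTA as the solver are illustrative assumptions, not the paper's experiments or tuning.

```python
# Sketch: G-Lasso on one-bit dithered measurements (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 500, 200, 10                 # measurements, dimension, sparsity
x = np.zeros(d)
x[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((n, d))        # sub-gaussian (Gaussian) measurement vectors

lam = 4 * np.linalg.norm(x)            # dither range; should dominate |<a_i, x>| w.h.p.
tau = rng.uniform(-lam, lam, size=n)   # uniform dither
y = lam * np.sign(A @ x + tau)         # scaled one-bit dithered observations

def ista_lasso(A, y, mu, iters=500):
    """Solve min_w 0.5*||y - A w||^2 + mu*||w||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ w - y)
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - mu / L, 0.0)  # soft-thresholding
    return w

# Regularization on the usual sigma*sqrt(n log d) scale (illustrative constant).
w_hat = ista_lasso(A, y, mu=lam * np.sqrt(n * np.log(d)))
print("relative error:", np.linalg.norm(w_hat - x) / np.linalg.norm(x))
```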
Anytime Stochastic Gradient Descent: A Time to Hear from all the Workers
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8635903
Nuwan S. Ferdinand, S. Draper
Abstract: In this paper, we focus on approaches to parallelizing stochastic gradient descent (SGD) in which data is farmed out to a set of workers whose results, after a number of updates, are combined at a central master node. Although such synchronized SGD approaches parallelize well in idealized computing environments, they often fail to realize their promised computational acceleration in practical settings. One cause is slow workers, termed stragglers, which can cause the fusion step at the master node to stall, greatly slowing convergence. In many straggler-mitigation approaches, the work completed by these nodes, while only partial, is discarded completely. In this paper, we propose an approach to parallelizing synchronous SGD that exploits the work completed by all workers. The central idea is to fix the computation time of each worker and then combine the distinct contributions of all workers. We provide a convergence analysis and optimize the combination function. Our numerical results demonstrate an improvement of several orders of magnitude in comparison to existing methods.
Citations: 19
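Below is a simplified simulation of the "fix the computation time, use every worker's partial work" idea. The quadratic objective, the worker speed model, and the proportional combining rule are illustrative assumptions; in particular, the weighting by completed minibatches is a stand-in, not the combination function optimized in the paper.

```python
# Sketch: synchronous SGD with fixed-time workers whose partial work is fused.
import numpy as np

rng = np.random.default_rng(1)
d, n_workers, T = 20, 8, 200
A = rng.standard_normal((1000, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(1000)

def stoch_grad(w, batch):
    Ab = A[batch]
    return Ab.T @ (Ab @ w - b[batch]) / len(batch)

w = np.zeros(d)
lr = 0.05
for t in range(T):
    grads, counts = [], []
    for _ in range(n_workers):
        # Each worker gets the same wall-clock budget but a random speed,
        # so it finishes a random number of minibatch gradients.
        n_batches = 1 + rng.poisson(3)
        g = np.zeros(d)
        for _ in range(n_batches):
            batch = rng.choice(len(b), size=32, replace=False)
            g += stoch_grad(w, batch)
        grads.append(g / n_batches)
        counts.append(n_batches)
    # Master fuses all workers, weighting each by the amount of work it finished.
    weights = np.array(counts) / sum(counts)
    w -= lr * sum(wi * gi for wi, gi in zip(weights, grads))

print("final loss:", 0.5 * np.mean((A @ w - b) ** 2))
```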
Local Weak Convergence Based Analysis of a New Graph Model
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8635966
Mehrdad Moharrami, V. Subramanian, M. Liu, R. Sundaresan
Abstract: Different random graph models have been proposed in an attempt to model individuals' behavior. Each of these models proposes a unique way to construct a random graph that captures some properties of real-world networks. In a recent work [4], the proposed model tries to capture the self-optimizing behavior of individuals, in which links are made based on the cost/benefit of the connection. In this paper, we analyze the asymptotics of this graph model. We prove that the model locally weakly converges [1] to a rooted tree associated with a branching process, which we name the Erlang Weighted Tree (EWT), and we analyze the main properties of the EWT.
Citations: 0
Minimax Optimal Sequential Tests for Multiple Hypotheses
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8635956
Michael Fauss, A. Zoubir, H. Poor
Abstract: Statistical hypothesis tests are referred to as robust if they are insensitive to small, random deviations from the underlying model. For two hypotheses and fixed sample sizes, robust testing is well studied and understood. However, few results exist for the case in which the number of samples is variable (i.e., sequential testing) and the number of hypotheses is larger than two (i.e., multiple hypothesis testing). This paper outlines a theory of minimax optimal sequential tests for multiple hypotheses under general distributional uncertainty. It is shown that, in analogy to the fixed sample size case, the minimax solution is an optimal test for the least favorable distributions, i.e., a test that optimally separates the most similar feasible distributions. The joint similarity of multiple distributions is shown to be determined by a weighted f-dissimilarity, whose corresponding function is given by the unique solution of a nonlinear integral equation and whose weights are given by the likelihood ratios of the past samples. As a consequence, the least favorable distributions depend on the past observations, and the underlying random process becomes a Markov process whose state variable coincides with the test statistic.
Citations: 2
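To make the object of study concrete, here is a minimal non-robust baseline: a standard multi-hypothesis sequential test that keeps sampling until one hypothesis' posterior clears a threshold. This is not the paper's minimax-robust construction (no least favorable distributions or f-dissimilarities are computed); the Gaussian hypotheses, uniform prior, and stopping threshold are illustrative assumptions.

```python
# Sketch: a plain multi-hypothesis sequential test (MSPRT-style baseline).
import numpy as np
from scipy.stats import norm

def msprt(sample_stream, means, sigma=1.0, threshold=0.99):
    """Sequentially test H_k: X ~ N(means[k], sigma^2), uniform prior."""
    log_post = np.zeros(len(means))          # unnormalized log posteriors
    for n, x in enumerate(sample_stream, start=1):
        log_post += norm.logpdf(x, loc=means, scale=sigma)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= threshold:
            return int(post.argmax()), n, post
    return int(post.argmax()), n, post       # ran out of samples: return the MAP

rng = np.random.default_rng(2)
true_mean = 0.5
stream = (rng.normal(true_mean, 1.0) for _ in range(10_000))
decision, n_used, post = msprt(stream, means=np.array([0.0, 0.5, 1.0]))
print(f"decided H{decision} after {n_used} samples, posterior {post.round(3)}")
```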
Breaking the Limits of Subspace Inference
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8635999
Claudia R. Solís-Lemus, Daniel L. Pimentel-Alarcón
Abstract: Inferring low-dimensional subspaces that describe high-dimensional, highly incomplete datasets has become a routine procedure in modern data science. This paper is about a curious phenomenon related to the amount of information required to estimate a subspace. On one hand, it has been shown that, information-theoretically, data in $\mathbb{R}^{d}$ must be observed on at least $\ell = r+1$ coordinates to uniquely identify an r-dimensional subspace that approximates it. On the other hand, it is well known that the subspace containing a dataset can be estimated through its sample covariance matrix, which only requires observing 2 coordinates per datapoint (regardless of $r$!). At first glance, this may seem to contradict the information-theoretic bound. The key lies in the subtle difference between identifiability (uniqueness) and estimation (most probable). It is true that if we only observe $\ell \leq r$ coordinates per datapoint, there will be infinitely many r-dimensional subspaces that perfectly agree with the observations. However, some subspaces may be more likely than others, and these are revealed by the sample covariance. This raises several fundamental questions: what are the algebraic relationships hidden in 2 coordinates that allow estimating an r-dimensional subspace? Moreover, are $\ell = 2$ coordinates per datapoint necessary for estimation, or is it possible with only $\ell = 1$? In this paper we show that, under certain assumptions, it is possible to estimate some subspaces up to finite choice with as few as $\ell = 1$ entry per column. This paper raises the question of whether there exist other subspace estimation methods that allow $\ell \leq r$ coordinates per datapoint and that are more efficient than the sample covariance, which converges slowly in the number of data points n.
Citations: 0
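The sketch below illustrates the "2 observed coordinates per column suffice for estimation via the sample covariance" phenomenon that the abstract contrasts with the identifiability bound. It is not the paper's $\ell = 1$ method; the dimensions, sample size, and entrywise-averaged covariance estimator are illustrative assumptions.

```python
# Sketch: subspace estimation from ell = 2 observed coordinates per column.
import numpy as np

rng = np.random.default_rng(3)
d, r, n = 30, 3, 100_000
U = np.linalg.qr(rng.standard_normal((d, r)))[0]        # true subspace basis
X = U @ rng.standard_normal((r, n))                      # full data (never fully seen)

C_sum = np.zeros((d, d))
C_cnt = np.zeros((d, d))
for k in range(n):
    i, j = rng.choice(d, size=2, replace=False)          # observe only 2 coordinates
    xi, xj = X[i, k], X[j, k]
    for a, b, v in [(i, i, xi * xi), (j, j, xj * xj), (i, j, xi * xj), (j, i, xi * xj)]:
        C_sum[a, b] += v
        C_cnt[a, b] += 1

# Entrywise-averaged covariance estimate from the partially observed columns.
C_hat = np.divide(C_sum, C_cnt, out=np.zeros_like(C_sum), where=C_cnt > 0)
eigvals, eigvecs = np.linalg.eigh(C_hat)
U_hat = eigvecs[:, -r:]                                  # top-r eigenvectors

# Largest principal angle between the true and estimated subspaces.
s = np.linalg.svd(U.T @ U_hat, compute_uv=False)
print("max principal angle (deg):", np.degrees(np.arccos(np.clip(s.min(), -1, 1))))
```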
On the Convergence of Distributed Subgradient Methods under Quantization
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8636036
T. Doan, S. T. Maguluri, J. Romberg
Abstract: Motivated by various applications in wireless sensor networks and edge computing, we study distributed optimization problems over a network of nodes, where the goal is to optimize a global objective function composed of a sum of local functions. In these problems, due to the large scale of the network, both computation and communication must be implemented locally, resulting in the need for distributed algorithms. In addition, the algorithms should be efficient enough to tolerate the limitations of the computing resources, memory capacity, and communication bandwidth shared between the nodes. To cope with such limitations, we consider in this paper distributed subgradient methods under quantization. Our main contribution is to provide a sufficient condition on the sequence of quantization levels that guarantees the convergence of distributed subgradient methods. Our results, while complementing existing results, suggest that distributed subgradient methods achieve the desired convergence properties even under quantization, as long as the quantization levels become finer and finer at a proper rate. We also provide numerical simulations comparing the convergence properties of such methods with and without quantization for solving the well-known least squares problem over networks.
Citations: 10
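Here is a minimal sketch of the algorithmic template described above: nodes on a ring exchange quantized iterates, the mixing step uses a doubly stochastic matrix, and the quantization step shrinks over time. The network, the local objectives, the step-size rule, and the 1/k shrinking rate are illustrative assumptions; the paper gives the precise sufficient condition on the quantization levels.

```python
# Sketch: distributed subgradient descent over a ring with quantized exchanges.
import numpy as np

rng = np.random.default_rng(4)
n_nodes, T = 10, 2000
targets = rng.uniform(-5, 5, size=n_nodes)   # local f_i(w) = |w - targets[i]|

# Doubly stochastic mixing matrix for a ring: 1/3 to self and to each neighbor.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1 / 3

def quantize(x, delta):
    """Uniform (mid-tread) quantizer with step size delta."""
    return delta * np.round(x / delta)

w = rng.uniform(-5, 5, size=n_nodes)         # one scalar estimate per node
for k in range(1, T + 1):
    alpha = 1.0 / np.sqrt(k)                 # diminishing step size
    delta = 1.0 / k                          # quantization gets finer with k
    w_q = quantize(w, delta)                 # what the nodes actually transmit
    subgrads = np.sign(w - targets)          # subgradient of |w - target_i|
    w = W @ w_q - alpha * subgrads           # consensus on quantized values + step

print("consensus spread:", w.max() - w.min())
print("estimates vs. true minimizer (median):", w.mean(), np.median(targets))
```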
Stability of Dynamic Feedback Optimization with Applications to Power Systems
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8635640
Sandeep Menta, Adrian Hauswirth, S. Bolognani, G. Hug, F. Dörfler
Abstract: We consider the problem of optimizing the steady state of a dynamical system in closed loop. Conventionally, the design of feedback optimization control laws assumes that the system is stationary. However, in reality, the dynamics of the (slow) iterative optimization routines can interfere with the (fast) system dynamics. We study the stability and convergence of these feedback optimization setups in closed loop with the underlying plant, via a custom-tailored singular perturbation analysis result. Our study is particularly geared towards applications in power systems and the question of whether recently developed online optimization schemes can be deployed without jeopardizing dynamic system stability.
Citations: 40
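For intuition, the following is a minimal sketch of closed-loop feedback optimization: a gradient controller adjusts the input of a stable LTI plant using measured outputs, on a slower timescale than the plant dynamics (the timescale separation that a singular perturbation analysis would make rigorous). The plant matrices, the cost function, and the gains are illustrative assumptions, not the paper's setup or a power-system model.

```python
# Sketch: gradient-based feedback optimization of an LTI plant's steady state.
import numpy as np

A = np.array([[0.8, 0.1], [0.0, 0.7]])        # stable plant dynamics
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])
y_ref = np.array([2.0])

# Steady-state input-output sensitivity: y_ss = C (I - A)^{-1} B u =: H u
H = C @ np.linalg.inv(np.eye(2) - A) @ B

def cost_grad_u(u, y):
    """Gradient w.r.t. u of phi(u, y) = 0.5*||y - y_ref||^2 + 0.05*||u||^2,
    with the measured output y standing in for the steady-state map H u."""
    return H.T @ (y - y_ref) + 0.1 * u

x = np.zeros(2)
u = np.zeros(1)
eta = 0.01                                    # slow controller gain
for t in range(500):
    # Fast plant update: several plant steps per controller update.
    for _ in range(10):
        x = A @ x + B @ u
    y = C @ x                                 # measured output
    u = u - eta * cost_grad_u(u, y)           # feedback gradient step

print("closed-loop steady-state output:", C @ np.linalg.inv(np.eye(2) - A) @ B @ u)
print("unconstrained optimizer u*:", np.linalg.solve(H.T @ H + 0.1 * np.eye(1), H.T @ y_ref))
```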
A Stochastic Expectation-Maximization Approach to Shuffled Linear Regression
2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Pub Date: 2018-10-01. DOI: 10.1109/ALLERTON.2018.8635907
Abubakar Abid, James Y. Zou
Abstract: We consider the problem of inference in a linear regression model in which the relative ordering of the input features and output labels is not known. Such datasets naturally arise from experiments in which the samples are shuffled or permuted during the protocol. In this work, we propose a framework that treats the unknown permutation as a latent variable. We maximize the likelihood of the observations using a stochastic expectation-maximization (EM) approach. We compare this to the dominant approach in the literature, which corresponds to hard EM in our framework. We show on synthetic data that the stochastic EM algorithm we develop has several advantages, including lower parameter error, less sensitivity to the choice of initialization, and significantly better performance on datasets that are only partially shuffled. We conclude by performing two experiments on real datasets that have been partially shuffled, in which we show that the stochastic EM algorithm can recover the weights with modest error.
Citations: 22
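The sketch below illustrates the latent-permutation framing on partially shuffled synthetic data: the permutation is sampled in the E-step and the weights are refit by least squares in the M-step. The Metropolis swap moves used as the E-step sampler, the partial-shuffle fraction, and the data sizes and noise level are illustrative assumptions, not the authors' implementation.

```python
# Sketch: stochastic EM for shuffled linear regression (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(6)
n, d, sigma = 100, 3, 0.1
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)

# Partially shuffled labels: a third of the rows have their labels permuted.
true_perm = np.arange(n)
sub = rng.choice(n, size=n // 3, replace=False)
true_perm[sub] = rng.permutation(sub)
y = X[true_perm] @ w_true + sigma * rng.normal(size=n)

def sample_permutation(perm, preds, n_moves=2000):
    """Stochastic E-step: Metropolis swap moves targeting p(perm | y, w)."""
    perm = perm.copy()
    for _ in range(n_moves):
        i, j = rng.integers(n, size=2)
        # Change in Gaussian log-likelihood if the assignments of y_i, y_j swap.
        cur = (y[i] - preds[perm[i]]) ** 2 + (y[j] - preds[perm[j]]) ** 2
        new = (y[i] - preds[perm[j]]) ** 2 + (y[j] - preds[perm[i]]) ** 2
        if new <= cur or rng.random() < np.exp((cur - new) / (2 * sigma ** 2)):
            perm[i], perm[j] = perm[j], perm[i]
    return perm

perm = np.arange(n)                                      # start from "not shuffled"
w, *_ = np.linalg.lstsq(X, y, rcond=None)                # naive initial fit
for _ in range(30):
    perm = sample_permutation(perm, X @ w)               # stochastic E-step
    w, *_ = np.linalg.lstsq(X[perm], y, rcond=None)      # M-step: refit weights
print("relative weight error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```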