Exploiting memory in dynamic average consensus

Bryan Van Scoy, R. Freeman, K. Lynch
{"title":"Exploiting memory in dynamic average consensus","authors":"Bryan Van Scoy, R. Freeman, K. Lynch","doi":"10.1109/ALLERTON.2015.7447013","DOIUrl":null,"url":null,"abstract":"In the discrete-time average consensus problem, each agent in a network has a local input and communicates with neighboring agents to calculate the global average of all agent inputs. We analyze diffusion-like algorithms where each agent maintains an internal state which it updates at each time step using its local input together with information it receives from neighboring agents. The agent's estimate of the global average input is then a local function of its internal state. Local memory on each agent can be used to enhance the performance of average consensus estimators in several ways. Agents can use memory to store both internal state variables as well as intermediate diffusion calculations within each time step. We exploit memory to design two types of estimators. First, we design feedback estimators which track constant input signals with zero steady-state error. Such estimators produce estimates that converge exponentially to the global average, and we consider the cost of an estimator to be the largest time constant of the exponential decay of its estimation errors. However, we measure time using normalized units of communicated real variables per agent, so that estimators requiring more communication per time step are potentially costlier even if they converge in fewer time steps. We then show that a certain estimator having two internal state variables and one diffusion calculation per time step achieves the minimal cost over all graphs and all estimators with one or two states no matter how many intermediate diffusion calculations are stored. Second, we design a feedforward estimator which tracks time-varying signals whose frequencies lie below some cut-off frequency. 
The steady-state error is finite, but can be made arbitrarily small using enough diffusion calculations per time step.","PeriodicalId":112948,"journal":{"name":"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2015.7447013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

In the discrete-time average consensus problem, each agent in a network has a local input and communicates with neighboring agents to calculate the global average of all agent inputs. We analyze diffusion-like algorithms where each agent maintains an internal state which it updates at each time step using its local input together with information it receives from neighboring agents. The agent's estimate of the global average input is then a local function of its internal state. Local memory on each agent can be used to enhance the performance of average consensus estimators in several ways. Agents can use memory to store both internal state variables as well as intermediate diffusion calculations within each time step. We exploit memory to design two types of estimators. First, we design feedback estimators which track constant input signals with zero steady-state error. Such estimators produce estimates that converge exponentially to the global average, and we consider the cost of an estimator to be the largest time constant of the exponential decay of its estimation errors. However, we measure time using normalized units of communicated real variables per agent, so that estimators requiring more communication per time step are potentially costlier even if they converge in fewer time steps. We then show that a certain estimator having two internal state variables and one diffusion calculation per time step achieves the minimal cost over all graphs and all estimators with one or two states no matter how many intermediate diffusion calculations are stored. Second, we design a feedforward estimator which tracks time-varying signals whose frequencies lie below some cut-off frequency. The steady-state error is finite, but can be made arbitrarily small using enough diffusion calculations per time step.
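The diffusion-like update described in the abstract can be illustrated with a minimal sketch. This is a generic first-order dynamic average consensus scheme with an input-change feedforward term, not the specific two-state feedback estimator or the frequency-shaped feedforward estimator from the paper; the graph, weights, and function names below are illustrative assumptions.

```python
import numpy as np

# Ring of 4 agents. W is a doubly stochastic weight matrix (Metropolis-style
# weights), so repeated multiplication by W averages values over the network.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

def dynamic_average_consensus(u, steps=200):
    """Run a simple dynamic average consensus iteration.

    u : function mapping time step k to the vector of local agent inputs.
    Each agent keeps one internal state: its current estimate of the average.
    """
    x = u(0).copy()  # internal state, initialized to the local inputs
    for k in range(steps):
        # One diffusion calculation per step, plus a local feedforward of
        # the change in each agent's own input.
        x = W @ x + (u(k + 1) - u(k))
    return x

# Constant inputs: the feedforward term vanishes and the estimates
# converge exponentially to the global average (2.5 here).
u_const = np.array([1.0, 2.0, 3.0, 4.0])
est = dynamic_average_consensus(lambda k: u_const)
```

For constant inputs the second-largest eigenvalue modulus of `W` (here 0.5) sets the time constant of the exponential decay of the estimation error, which is the notion of cost the abstract refers to, up to its normalization by communicated variables per agent.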