{"title":"Exploiting memory in dynamic average consensus","authors":"Bryan Van Scoy, R. Freeman, K. Lynch","doi":"10.1109/ALLERTON.2015.7447013","DOIUrl":null,"url":null,"abstract":"In the discrete-time average consensus problem, each agent in a network has a local input and communicates with neighboring agents to calculate the global average of all agent inputs. We analyze diffusion-like algorithms where each agent maintains an internal state which it updates at each time step using its local input together with information it receives from neighboring agents. The agent's estimate of the global average input is then a local function of its internal state. Local memory on each agent can be used to enhance the performance of average consensus estimators in several ways. Agents can use memory to store both internal state variables as well as intermediate diffusion calculations within each time step. We exploit memory to design two types of estimators. First, we design feedback estimators which track constant input signals with zero steady-state error. Such estimators produce estimates that converge exponentially to the global average, and we consider the cost of an estimator to be the largest time constant of the exponential decay of its estimation errors. However, we measure time using normalized units of communicated real variables per agent, so that estimators requiring more communication per time step are potentially costlier even if they converge in fewer time steps. We then show that a certain estimator having two internal state variables and one diffusion calculation per time step achieves the minimal cost over all graphs and all estimators with one or two states no matter how many intermediate diffusion calculations are stored. Second, we design a feedforward estimator which tracks time-varying signals whose frequencies lie below some cut-off frequency. The steady-state error is finite, but can be made arbitrarily small using enough diffusion calculations per time step.","PeriodicalId":112948,"journal":{"name":"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2015.7447013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
In the discrete-time average consensus problem, each agent in a network has a local input and communicates with neighboring agents to calculate the global average of all agent inputs. We analyze diffusion-like algorithms in which each agent maintains an internal state that it updates at each time step using its local input together with information it receives from neighboring agents. The agent's estimate of the global average input is then a local function of its internal state. Local memory on each agent can be used to enhance the performance of average consensus estimators in several ways. Agents can use memory to store both internal state variables and intermediate diffusion calculations within each time step. We exploit memory to design two types of estimators. First, we design feedback estimators that track constant input signals with zero steady-state error. Such estimators produce estimates that converge exponentially to the global average, and we consider the cost of an estimator to be the largest time constant of the exponential decay of its estimation errors. However, we measure time using normalized units of communicated real variables per agent, so that estimators requiring more communication per time step are potentially costlier even if they converge in fewer time steps. We then show that a certain estimator having two internal state variables and one diffusion calculation per time step achieves the minimal cost over all graphs and all estimators with one or two states, no matter how many intermediate diffusion calculations are stored. Second, we design a feedforward estimator that tracks time-varying signals whose frequencies lie below some cut-off frequency. The steady-state error is finite, but can be made arbitrarily small using enough diffusion calculations per time step.
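To make the feedback-estimator idea concrete, the sketch below simulates a generic proportional-integral (PI) style dynamic average consensus estimator, a standard construction from the consensus literature rather than the paper's cost-optimal two-state design. Each agent keeps one auxiliary memory state alongside its estimate, and the integral action drives the steady-state error to zero for constant inputs. The ring graph, the gains gamma, kP, kI, and the Euler step size h are all illustrative assumptions.

```python
# Minimal sketch of a PI-style dynamic average consensus estimator.
# NOT the specific estimator designed in the paper; graph, gains, and
# step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10                            # agents on an undirected ring graph
u = rng.standard_normal(n)        # constant local inputs
avg = u.mean()                    # quantity every agent should estimate

# Laplacian of the ring: each agent communicates with two neighbors.
L = 2.0 * np.eye(n)
for i in range(n):
    L[i, (i - 1) % n] -= 1.0
    L[i, (i + 1) % n] -= 1.0

gamma, kP, kI = 1.0, 1.0, 0.5     # estimator gains (assumed values)
h = 0.1                           # Euler step size (assumed small enough)

x = np.zeros(n)                   # estimate state (one per agent)
w = np.zeros(n)                   # integral (memory) state

for _ in range(5000):
    # Simultaneous update: the w update uses the pre-update value of x.
    x, w = (x + h * (gamma * (u - x) - kP * (L @ x) + kI * (L @ w)),
            w - h * kI * (L @ x))

print("max deviation from true average:", np.abs(x - avg).max())
```

Here the auxiliary state w plays the role of the memory described in the abstract: summing the x-update over all agents shows the network mean of the estimates converges to the mean of the inputs, and the integral term removes the steady-state error for constant u, while the convergence rate (the cost, in the paper's terms) depends on the graph and the gains.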