Entropy Estimation: Simulation, Theory and a Case Study

Ioannis Kontoyiannis
2006 IEEE Information Theory Workshop - ITW '06 Punta del Este
Published: 2006-03-13 · DOI: 10.1109/ITW.2006.1633823
Citations: 1

Abstract

We consider the statistical problem of estimating the entropy of finite-alphabet data generated from an unknown stationary process. We examine a series of estimators, including: (1) The standard maximum-likelihood or "plug-in" estimator; (2) Four different estimators based on the family of Lempel-Ziv compression algorithms; (3) A different plug-in estimator especially tailored to renewal processes; and (4) The natural estimator derived from the Context-Tree Weighting method (CTW). Some of these estimators are well-known, and some are new. We first summarize numerous theoretical properties of these estimators: conditions for consistency, estimates of their bias and variance, and methods for approximating the estimation error and for obtaining confidence intervals. Several new theoretical results are developed. We show how these theoretical results offer guidelines for tuning the parameters involved in the estimation process. We then present an extensive simulation study on various types of synthetic data under various conditions. We compare the estimators' performance and comment on the strengths and weaknesses of the various methods. For each estimator, we develop a precise method for calculating the estimation error based on any specific data set. Finally, we report the performance of these entropy estimators on the (binary) spike trains of 28 neurons recorded simultaneously for a one-hour period from the primary motor and dorsal premotor cortices of a quietly seated monkey not engaged in a task behavior. Based on joint work with Yun Gao and Elie Bienenstock.
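As a rough illustration of two of the estimator families discussed above, the sketch below (written for this page, not taken from the paper) implements a block plug-in estimator and one common Lempel-Ziv-style match-length estimator for binary data; the exact normalizations, window conventions, and bias corrections studied in the paper may differ.

```python
import math
from collections import Counter

def plugin_entropy(data, k=1):
    """Plug-in (maximum-likelihood) estimator: the entropy of the empirical
    distribution of overlapping length-k blocks, in bits per symbol."""
    n = len(data) - k + 1
    counts = Counter(tuple(data[i:i + k]) for i in range(n))
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) / k

def lz_match_entropy(bits):
    """A Lempel-Ziv-style match-length estimator (one common variant):
    L_i is one plus the length of the longest prefix of bits[i:] that also
    starts somewhere before position i; the entropy estimate is the
    reciprocal of the average of L_i / log2(i + 1)."""
    s = "".join(map(str, bits))
    n = len(s)
    total = 0.0
    for i in range(1, n):
        l = 0
        # grow the match while s[i:i+l+1] also starts at some index j < i
        # (overlapping matches are allowed, as in the underlying theory)
        while i + l < n and s.find(s[i:i + l + 1]) < i:
            l += 1
        total += (l + 1) / math.log2(i + 1)
    return (n - 1) / total
```

On a long i.i.d. fair-coin sequence both estimates approach 1 bit per symbol, while on highly structured data the LZ estimate falls toward zero. Note that on a deterministic alternation 0101... the k=1 plug-in estimate still reports 1 bit per symbol, which illustrates why the choice of block length and of estimation method matters, as the tuning discussion above suggests.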