Determination of parametric average code length of inaccuracy measure

Arif Habib
{"title":"Determination of parametric average code length of inaccuracy measure","authors":"Arif Habib","doi":"10.15406/BBIJ.2018.07.00219","DOIUrl":null,"url":null,"abstract":"From past three decades, entropy which is branch of statistical sciences has been used to determine the degree of variability, describes how uncertainty should be quantified in a skillful manner for representation. Statistical entropy has some conflicting explanations so that sometimes it measures two complementary conceptions like information and lack of information. Claude Shannon through two outstanding contributions in 1948 and 1949 relates it with positive information. These were followed by a flood of research papers hypothesize upon the possible applications in almost every field such as pure mathematics, semantics, physics, management, thermodynamics, botany, econometrics, operations research, psychology, epidemiological studies, disease management and related disciplines. Information theory has also had an important role in shaping theories of perception, cognition, and neural computation. When the message is readily measurable, we can say that the information is the reduction of uncertainty. But we usually encountered lossy information i.e a part of the transmitted information reaches the destination in a distorted form. In statistical theory of information, certain specialized terms which need to be translated into a measurable form. A source is similar to the space of a random experiment. A finite sequence of characters is called a word in the same way that the sequence of a number of outcomes associated with the repetition of an experiment may be designated as an event. An interesting observation can be made about the entropy of a binary source. Binary coding offers an interesting practical opportunity for encoding.","PeriodicalId":90455,"journal":{"name":"Biometrics & biostatistics international journal","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2018-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biometrics & biostatistics international journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15406/BBIJ.2018.07.00219","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

For the past three decades, entropy, a branch of the statistical sciences, has been used to determine the degree of variability; it describes how uncertainty should be quantified in a skillful manner for representation. Statistical entropy admits some conflicting interpretations, so that it sometimes measures two complementary conceptions, such as information and lack of information. Claude Shannon, through two outstanding contributions in 1948 and 1949, related it to positive information. These were followed by a flood of research papers hypothesizing about possible applications in almost every field, such as pure mathematics, semantics, physics, management, thermodynamics, botany, econometrics, operations research, psychology, epidemiological studies, disease management, and related disciplines. Information theory has also played an important role in shaping theories of perception, cognition, and neural computation. When the message is readily measurable, we can say that information is the reduction of uncertainty. But we usually encounter lossy information, i.e., a part of the transmitted information reaches the destination in a distorted form. The statistical theory of information uses certain specialized terms that need to be translated into a measurable form. A source is analogous to the sample space of a random experiment. A finite sequence of characters is called a word, in the same way that a sequence of outcomes associated with repetitions of an experiment may be designated an event. An interesting observation can be made about the entropy of a binary source, and binary coding offers an interesting practical opportunity for encoding.
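The abstract's closing remarks about binary-source entropy and coding correspond to two standard facts that a short sketch can make concrete. This is background only, not the paper's construction: the paper concerns a parametric generalization of the average code length for an inaccuracy measure, which is not reproduced here. The sketch below shows (i) that the entropy of a binary source is maximal, at one bit, when both symbols are equally likely, and (ii) Kerridge's classical observation that when codeword lengths are built from a distorted estimate Q of the true distribution P, the average code length L is bracketed by the inaccuracy K(P, Q) rather than by the entropy H(P). The distributions P and Q are made-up illustrative values.

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy (in bits) of a binary source with P(0) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def inaccuracy(P, Q):
    """Kerridge's inaccuracy K(P, Q) = -sum_i p_i log2 q_i, in bits."""
    return -sum(p * math.log2(q) for p, q in zip(P, Q) if p > 0)

def shannon_code_lengths(Q):
    """Integer codeword lengths l_i = ceil(-log2 q_i) built from the
    (possibly inaccurate) distribution Q; they satisfy Kraft's inequality."""
    return [math.ceil(-math.log2(q)) for q in Q]

# True source distribution P and a distorted estimate Q (illustrative values).
P = [0.5, 0.25, 0.125, 0.125]
Q = [0.4, 0.3, 0.2, 0.1]

L = sum(p * l for p, l in zip(P, shannon_code_lengths(Q)))
K = inaccuracy(P, Q)

print(f"binary entropy at p = 0.5: {binary_entropy(0.5):.3f} bit (its maximum)")
print(f"inaccuracy K(P,Q) = {K:.3f} bits, average code length L = {L:.3f} bits")

# Noiseless-coding bound under an inaccurate model: K(P,Q) <= L < K(P,Q) + 1.
assert K <= L < K + 1
```

Since K(P, Q) = H(P) + D(P‖Q), the penalty for coding with an inaccurate model is exactly the relative entropy between the true and assumed distributions; the parametric average code lengths studied in the paper generalize the Shannon-type bound illustrated here.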