{"title":"将最大熵技术扩展到熵约束","authors":"G. Xiang, V. Kreinovich","doi":"10.1109/NAFIPS.2010.5548264","DOIUrl":null,"url":null,"abstract":"In many practical situations, we have only partial information about the probabilities. In some cases, we have crisp (interval) bounds on the probabilities and/or on the related statistical characteristics. In other situations, we have fuzzy bounds, i.e., different interval bounds with different degrees of certainty. In a situation with uncertainty, we do not know the exact value of the desired characteristic. In such situations, it is desirable to find its worst possible value, its best possible value, and its “typical” value – corresponding to the “most probable” probability distribution. Usually, as such a “typical” distribution, we select the one with the largest value of the entropy. This works perfectly well in usual cases when the information about the distribution consists of the values of moments and other characteristics. For example, if we only know the first and the second moments, then the distribution with the largest entropy if the normal (Gaussian) one. However, in some situations, we know the entropy (= amount of information) of the distribution. In this case, the maximum entropy approach does not work, since all the distributions which are consistent with our knowledge have the exact sam e entropy value. In this paper, we show how the main ideas of the maximum entropy approach can be extended to this case.","PeriodicalId":394892,"journal":{"name":"2010 Annual Meeting of the North American Fuzzy Information Processing Society","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Extending maximum entropy techniques to entropy constraints\",\"authors\":\"G. Xiang, V. Kreinovich\",\"doi\":\"10.1109/NAFIPS.2010.5548264\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In many practical situations, we have only partial information about the probabilities. In some cases, we have crisp (interval) bounds on the probabilities and/or on the related statistical characteristics. In other situations, we have fuzzy bounds, i.e., different interval bounds with different degrees of certainty. In a situation with uncertainty, we do not know the exact value of the desired characteristic. In such situations, it is desirable to find its worst possible value, its best possible value, and its “typical” value – corresponding to the “most probable” probability distribution. Usually, as such a “typical” distribution, we select the one with the largest value of the entropy. This works perfectly well in usual cases when the information about the distribution consists of the values of moments and other characteristics. For example, if we only know the first and the second moments, then the distribution with the largest entropy if the normal (Gaussian) one. However, in some situations, we know the entropy (= amount of information) of the distribution. In this case, the maximum entropy approach does not work, since all the distributions which are consistent with our knowledge have the exact sam e entropy value. 
In this paper, we show how the main ideas of the maximum entropy approach can be extended to this case.\",\"PeriodicalId\":394892,\"journal\":{\"name\":\"2010 Annual Meeting of the North American Fuzzy Information Processing Society\",\"volume\":\"50 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 Annual Meeting of the North American Fuzzy Information Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NAFIPS.2010.5548264\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 Annual Meeting of the North American Fuzzy Information Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NAFIPS.2010.5548264","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Extending maximum entropy techniques to entropy constraints
In many practical situations, we have only partial information about the probabilities. In some cases, we have crisp (interval) bounds on the probabilities and/or on the related statistical characteristics. In other situations, we have fuzzy bounds, i.e., different interval bounds with different degrees of certainty. In a situation with uncertainty, we do not know the exact value of the desired characteristic. In such situations, it is desirable to find its worst possible value, its best possible value, and its "typical" value, corresponding to the "most probable" probability distribution. Usually, as such a "typical" distribution, we select the one with the largest value of the entropy. This works perfectly well in the usual cases, when the information about the distribution consists of the values of moments and other characteristics. For example, if we only know the first and the second moments, then the distribution with the largest entropy is the normal (Gaussian) one. However, in some situations, we know the entropy (= amount of information) of the distribution. In this case, the maximum entropy approach does not work, since all the distributions consistent with our knowledge have exactly the same entropy value. In this paper, we show how the main ideas of the maximum entropy approach can be extended to this case.
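A minimal numerical sketch (not from the paper) of the classical setup the abstract describes: on a discretized sample space, we maximize Shannon entropy subject to constraints on the first and second moments, and the maximizer comes out close to a discretized Gaussian. The grid, the scipy-based solver, and the specific moment values (mean 0, second moment 1) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Discretize the real line so that a probability distribution is a vector p.
x = np.linspace(-5, 5, 101)

def neg_entropy(p):
    # Negative Shannon entropy -sum(p * log p); the small epsilon avoids log(0).
    return np.sum(p * np.log(p + 1e-12))

# Constraints: p sums to 1, and the first and second moments are fixed.
constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},          # normalization
    {"type": "eq", "fun": lambda p: np.sum(p * x)},            # mean = 0
    {"type": "eq", "fun": lambda p: np.sum(p * x**2) - 1.0},   # second moment = 1
]

p0 = np.full(x.size, 1.0 / x.size)  # start from the uniform distribution
result = minimize(neg_entropy, p0, method="SLSQP",
                  bounds=[(0.0, 1.0)] * x.size, constraints=constraints)

# The maximizer should be close to a discretized standard normal density.
p_gauss = np.exp(-x**2 / 2)
p_gauss /= p_gauss.sum()
print(np.abs(result.x - p_gauss).max())  # small deviation
```

The degeneracy the paper addresses is easy to see in this setting: if we replace the moment constraints with a constraint on the entropy itself, the objective -H(p) is constant on the whole feasible set, so every feasible distribution is a maximizer and the criterion no longer selects a unique "typical" distribution.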