{"title":"信息瓶颈函数的字母大小边界","authors":"C. Hirche, A. Winter","doi":"10.1109/ISIT44484.2020.9174416","DOIUrl":null,"url":null,"abstract":"The information bottleneck function gives a measure of optimal preservation of correlation between some random variable X and some side information Y while compressing X into a new random variable W with bounded remaining correlation to X. As such, the information bottleneck has found many natural applications in machine learning, coding and video compression. The main objective in order to calculate the information bottleneck is to find the optimal representation on W. This could in principle be arbitrarily complicated, but fortunately it is known that the cardinality of W can be restricted as $|\\mathcal{W}| \\leq |\\mathcal{X}| + 1$ which makes the calculation possible for finite $|\\mathcal{X}|$. Now, for many practical applications, e.g. in machine learning, X represents a potentially very large data space, while Y is from a comparably small set of labels. This raises the question whether the known cardinality bound can be improved in such situations. We show that the information bottleneck function can always be approximated up to an error $\\delta (\\varepsilon,\\;|\\mathcal{Y}|)$ with a cardinality $|\\mathcal{W}| \\leq f( \\in,\\;|\\mathcal{Y}|)$, for explicitly given functions δ and f of an approximation parameter ϵ > 0 and the cardinality of $\\mathcal{Y}$.Finally, we generalize the known cardinality boundsY to the case were some of the random variables represent quantum information.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"An Alphabet-Size Bound for the Information Bottleneck Function\",\"authors\":\"C. Hirche, A. Winter\",\"doi\":\"10.1109/ISIT44484.2020.9174416\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The information bottleneck function gives a measure of optimal preservation of correlation between some random variable X and some side information Y while compressing X into a new random variable W with bounded remaining correlation to X. As such, the information bottleneck has found many natural applications in machine learning, coding and video compression. The main objective in order to calculate the information bottleneck is to find the optimal representation on W. This could in principle be arbitrarily complicated, but fortunately it is known that the cardinality of W can be restricted as $|\\\\mathcal{W}| \\\\leq |\\\\mathcal{X}| + 1$ which makes the calculation possible for finite $|\\\\mathcal{X}|$. Now, for many practical applications, e.g. in machine learning, X represents a potentially very large data space, while Y is from a comparably small set of labels. This raises the question whether the known cardinality bound can be improved in such situations. 
We show that the information bottleneck function can always be approximated up to an error $\\\\delta (\\\\varepsilon,\\\\;|\\\\mathcal{Y}|)$ with a cardinality $|\\\\mathcal{W}| \\\\leq f( \\\\in,\\\\;|\\\\mathcal{Y}|)$, for explicitly given functions δ and f of an approximation parameter ϵ > 0 and the cardinality of $\\\\mathcal{Y}$.Finally, we generalize the known cardinality boundsY to the case were some of the random variables represent quantum information.\",\"PeriodicalId\":159311,\"journal\":{\"name\":\"2020 IEEE International Symposium on Information Theory (ISIT)\",\"volume\":\"35 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Symposium on Information Theory (ISIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISIT44484.2020.9174416\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Symposium on Information Theory (ISIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT44484.2020.9174416","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Alphabet-Size Bound for the Information Bottleneck Function
The information bottleneck function gives a measure of the optimal preservation of correlation between some random variable X and some side information Y while compressing X into a new random variable W with bounded remaining correlation to X. As such, the information bottleneck has found many natural applications in machine learning, coding and video compression. The main objective when calculating the information bottleneck is to find the optimal representation W. This could in principle be arbitrarily complicated, but fortunately it is known that the cardinality of W can be restricted as $|\mathcal{W}| \leq |\mathcal{X}| + 1$, which makes the calculation possible for finite $|\mathcal{X}|$. Now, in many practical applications, e.g. in machine learning, X represents a potentially very large data space, while Y comes from a comparatively small set of labels. This raises the question of whether the known cardinality bound can be improved in such situations. We show that the information bottleneck function can always be approximated up to an error $\delta(\varepsilon, |\mathcal{Y}|)$ with a cardinality $|\mathcal{W}| \leq f(\varepsilon, |\mathcal{Y}|)$, for explicitly given functions $\delta$ and $f$ of an approximation parameter $\varepsilon > 0$ and the cardinality of $\mathcal{Y}$. Finally, we generalize the known cardinality bounds to the case where some of the random variables represent quantum information.
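For concreteness, a standard formulation of the information bottleneck function consistent with the description above (the symbol $t$ for the compression threshold is an assumption here, not notation taken from the paper) is, for random variables $X, Y$ with a fixed joint distribution $p_{XY}$,
$$\mathrm{IB}(t) \;=\; \sup_{\substack{p_{W\mid X}\,:\; I(X;W)\,\leq\, t}} I(Y;W),$$
where the supremum runs over channels $p_{W\mid X}$ such that $W - X - Y$ form a Markov chain. The classical cardinality bound $|\mathcal{W}| \leq |\mathcal{X}| + 1$ ensures the supremum is taken over a finite-dimensional set; the result stated above replaces it by a bound depending only on $\varepsilon$ and $|\mathcal{Y}|$, at the cost of an approximation error $\delta(\varepsilon, |\mathcal{Y}|)$.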