{"title":"基于互信息的期望超额风险下界","authors":"M. B. Dogan, M. Gastpar","doi":"10.1109/ITW48936.2021.9611483","DOIUrl":null,"url":null,"abstract":"The expected excess risk of a learning algorithm is the average suboptimality of using the learning algorithm, relative to the optimal hypothesis in the hypothesis class. In this work, we lower bound the expected excess risk of a learning algorithm using the mutual information between the input and the noisy output of the learning algorithm. The setting we consider is, where the hypothesis class is the set of real numbers and the true risk function has a local strong convexity property. Our main results also lead to asymptotic lower bounds on the expected excess risk, which do not require the knowledge of the local strong convexity constants of the true risk function.","PeriodicalId":325229,"journal":{"name":"2021 IEEE Information Theory Workshop (ITW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Lower Bounds on the Expected Excess Risk Using Mutual Information\",\"authors\":\"M. B. Dogan, M. Gastpar\",\"doi\":\"10.1109/ITW48936.2021.9611483\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The expected excess risk of a learning algorithm is the average suboptimality of using the learning algorithm, relative to the optimal hypothesis in the hypothesis class. In this work, we lower bound the expected excess risk of a learning algorithm using the mutual information between the input and the noisy output of the learning algorithm. The setting we consider is, where the hypothesis class is the set of real numbers and the true risk function has a local strong convexity property. Our main results also lead to asymptotic lower bounds on the expected excess risk, which do not require the knowledge of the local strong convexity constants of the true risk function.\",\"PeriodicalId\":325229,\"journal\":{\"name\":\"2021 IEEE Information Theory Workshop (ITW)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Information Theory Workshop (ITW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITW48936.2021.9611483\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Information Theory Workshop (ITW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITW48936.2021.9611483","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Lower Bounds on the Expected Excess Risk Using Mutual Information
The expected excess risk of a learning algorithm is the average suboptimality of the algorithm's output relative to the optimal hypothesis in the hypothesis class. In this work, we lower bound the expected excess risk of a learning algorithm using the mutual information between the input and the noisy output of the learning algorithm. The setting we consider is one where the hypothesis class is the set of real numbers and the true risk function satisfies a local strong convexity property. Our main results also lead to asymptotic lower bounds on the expected excess risk that do not require knowledge of the local strong convexity constants of the true risk function.
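For concreteness, a minimal sketch of the quantities the abstract refers to, using assumed notation (W, L, w*, mu, not taken from the paper): if the algorithm outputs a hypothesis $W \in \mathbb{R}$ and $L$ denotes the true (population) risk, then

\[
\mathrm{EER}(W) \;=\; \mathbb{E}\big[L(W)\big] \;-\; \inf_{w \in \mathbb{R}} L(w),
\qquad
\mathrm{EER}(W) \;\ge\; \frac{\mu}{2}\,\mathbb{E}\big[(W - w^\ast)^2\big]
\;\text{ whenever } L(w) - L(w^\ast) \ge \tfrac{\mu}{2}(w - w^\ast)^2 \text{ locally around the minimizer } w^\ast.
\]

Under such local $\mu$-strong convexity, any lower bound on the mean squared estimation error $\mathbb{E}[(W - w^\ast)^2]$, for instance one expressed through the mutual information between the input sample and a noisy version of the output $W$, therefore translates into a lower bound on the expected excess risk; the paper's specific bounds are not reproduced here.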