Bayesian Incentive-Compatible Bandit Exploration
Y. Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis
Proceedings of the Sixteenth ACM Conference on Economics and Computation, 2015. DOI: 10.1145/2764468.2764508
Individual decision-makers consume information revealed by previous decision-makers and produce information that may help future decision-makers. This phenomenon is common in a wide range of scenarios in the Internet economy, as well as elsewhere, such as in medical decisions. When required to select an action, each decision-maker would individually prefer to exploit: to select the action with the highest expected reward conditional on her information. At the same time, each decision-maker would prefer previous decision-makers to have explored, producing information about the rewards of the various actions. A social planner, by means of carefully designed information disclosure, can incentivize the agents to balance exploration and exploitation, maximizing social welfare. We formulate this problem as a multi-armed bandit problem (and various generalizations thereof) under incentive-compatibility constraints induced by the agents' Bayesian priors. We design an incentive-compatible bandit algorithm for the social planner with asymptotically optimal regret. Further, we provide a black-box reduction from an arbitrary multi-armed bandit algorithm to an incentive-compatible one, with only a constant multiplicative increase in regret. This reduction works for very general bandit settings, even ones that incorporate contexts and arbitrary partial feedback.
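The black-box reduction admits a compact illustration. Below is a minimal Python sketch, not the authors' implementation: it assumes Bernoulli rewards, uses UCB1 as a stand-in for the arbitrary black-box bandit algorithm, and treats the phase length `phase_len` as a free parameter, whereas the paper derives the prior-dependent phase length needed for incentive compatibility and for the constant multiplicative regret bound.

```python
# A minimal sketch of the hidden-exploration idea behind the black-box
# reduction: each phase of L rounds contains one uniformly random round
# controlled by the black-box bandit algorithm, while every other round
# recommends the empirically best arm. Agents see only the recommendation,
# so for L large enough (a prior-dependent constant the paper computes,
# omitted here) following it remains Bayesian incentive-compatible.
import math
import random

class UCB1:
    """Stand-in black-box algorithm; the reduction accepts any bandit algorithm."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.sums = [0.0] * n_arms
        self.t = 0

    def select(self):
        self.t += 1
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm  # try every arm once before using the UCB index
        return max(range(len(self.counts)),
                   key=lambda a: self.sums[a] / self.counts[a]
                              + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward

def bic_reduction(alg, true_means, n_phases, phase_len, rng=random):
    """Run n_phases * phase_len rounds of the hidden-exploration schedule
    on a Bernoulli bandit with the given (unknown-to-agents) mean rewards."""
    n_arms = len(true_means)
    counts = [0] * n_arms   # planner-side statistics that determine
    sums = [0.0] * n_arms   # the "exploit" recommendation
    total_reward = 0.0
    for _ in range(n_phases):
        explore_round = rng.randrange(phase_len)  # hidden from the agents
        for r in range(phase_len):
            if r == explore_round:
                arm = alg.select()  # black-box exploration round
            else:
                # exploit: recommend the arm with the best empirical mean so far
                arm = max(range(n_arms),
                          key=lambda a: sums[a] / counts[a] if counts[a] else 0.0)
            reward = 1.0 if rng.random() < true_means[arm] else 0.0
            counts[arm] += 1
            sums[arm] += reward
            total_reward += reward
            if r == explore_round:
                alg.update(arm, reward)  # only this round feeds the black box
    return total_reward
```

For example, `bic_reduction(UCB1(3), [0.3, 0.5, 0.7], n_phases=200, phase_len=10)` simulates 2,000 agents, of which only 200 rounds are handed to the black-box algorithm; this dilution is what keeps the regret overhead a multiplicative constant (roughly `phase_len`) while hiding exploration from any individual agent.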