{"title":"Policy Optimization Using Semiparametric Models for Dynamic Pricing","authors":"Jianqing Fan, Yongyi Guo, Mengxin Yu","doi":"10.2139/ssrn.3922825","DOIUrl":null,"url":null,"abstract":"In this paper, we study the contextual dynamic pricing problem where the market value of a product is linear in their observed features plus some market noise. Products are sold one at a time, and only a binary response indicating the success or failure of a sale is observed. Our model setting is similar to \\cite{JN19} except that we expand the demand curve to a semiparametric model and need to learn dynamically both parametric and nonparametric components. We propose a dynamic statistical learning and decision-making policy that combines semiparametric estimation from a generalized linear model with an unknown link and online decision making to minimize regret (maximize revenue). Under mild conditions, we show that for a market noise c.d.f. $F(\\cdot)$ with $m$-th order derivative, our policy achieves a regret upper bound of $\\tilde{\\cO}_{d}(T^{\\frac{2m+1}{4m-1}})$ for $m\\geq 2$, where $T$ is time horizon and $\\tilde{\\cO}_{d}$ is the order that hides logarithmic terms and the dimensionality of feature $d$. The upper bound is further reduced to $\\tilde{\\cO}_{d}(\\sqrt{T})$ if $F$ is super smooth whose Fourier transform decays exponentially. In terms of dependence on the horizon $T$, these upper bounds are close to $\\Omega(\\sqrt{T})$, the lower bound where the market noise distribution belongs to a parametric class. We further generalize these results to the case when the product features are dynamically dependent, satisfying some strong mixing conditions.","PeriodicalId":406435,"journal":{"name":"CompSciRN: Other Machine Learning (Topic)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"CompSciRN: Other Machine Learning (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3922825","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14
Abstract
In this paper, we study the contextual dynamic pricing problem in which the market value of a product is linear in its observed features plus some market noise. Products are sold one at a time, and only a binary response indicating the success or failure of a sale is observed. Our model setting is similar to \cite{JN19}, except that we expand the demand curve to a semiparametric model and must dynamically learn both its parametric and nonparametric components. We propose a dynamic statistical learning and decision-making policy that combines semiparametric estimation for a generalized linear model with an unknown link function with online decision making to minimize regret (maximize revenue). Under mild conditions, we show that for a market noise c.d.f. $F(\cdot)$ with an $m$-th order derivative, $m \geq 2$, our policy achieves a regret upper bound of $\tilde{\mathcal{O}}_{d}(T^{\frac{2m+1}{4m-1}})$, where $T$ is the time horizon and $\tilde{\mathcal{O}}_{d}$ hides logarithmic terms and the feature dimensionality $d$. The upper bound is further reduced to $\tilde{\mathcal{O}}_{d}(\sqrt{T})$ if $F$ is super smooth, i.e., its Fourier transform decays exponentially. In terms of the dependence on the horizon $T$, these upper bounds are close to $\Omega(\sqrt{T})$, the lower bound for the case where the market noise distribution belongs to a parametric class. We further generalize these results to the case where the product features are dynamically dependent and satisfy strong mixing conditions.
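For concreteness, the sketch below simulates the feedback model the abstract describes: the market value is a linear function of the features plus noise, a price is posted, and only a binary sale indicator is observed. This is a minimal illustration under assumed choices (names such as `theta_star`, the logistic noise, and the placeholder pricing rule are hypothetical), not the authors' policy or code.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5                                          # feature dimension
T = 1000                                       # time horizon
theta_star = rng.normal(size=d) / np.sqrt(d)   # unknown linear parameter (assumed)

def market_noise() -> float:
    # Market noise z_t with c.d.f. F; a logistic draw is just one smooth example.
    return rng.logistic(loc=0.0, scale=0.5)

def sale_indicator(x: np.ndarray, price: float) -> float:
    # Market value v_t = x_t' theta* + z_t; a sale occurs iff the posted price <= v_t.
    value = float(x @ theta_star) + market_noise()
    return float(price <= value)

# Expected revenue at price p given features x is p * (1 - F(p - x' theta*)),
# which the paper's policy maximizes while estimating theta* and F online.
history = []
for t in range(T):
    x_t = rng.normal(size=d) / np.sqrt(d)
    p_t = max(float(x_t @ theta_star), 0.0)    # placeholder pricing rule, not the paper's policy
    y_t = sale_indicator(x_t, p_t)             # only this binary feedback is observed
    history.append((x_t, p_t, y_t))
```

A learning policy would replace the placeholder rule with prices computed from semiparametric estimates of $\theta^*$ and $F$ built from the collected $(x_t, p_t, y_t)$ triples.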