Learning to Bid in Contextual First Price Auctions
arXiv - CS - Computer Science and Game Theory. Pub Date: 2021-09-07, DOI: arxiv-2109.03173
Ashwinkumar Badanidiyuru, Zhe Feng, Guru Guruganesh

In this paper, we investigate the problem of how to bid in repeated contextual first price auctions. We consider a single bidder (learner) who repeatedly bids in first price auctions: at each time $t$, the learner observes a context $x_t\in \mathbb{R}^d$ and decides the bid based on historical information and $x_t$. We assume a structured linear model of the maximum bid of all other bidders, $m_t = \alpha_0\cdot x_t + z_t$, where $\alpha_0\in \mathbb{R}^d$ is unknown to the learner and $z_t$ is randomly sampled from a noise distribution $\mathcal{F}$ with log-concave density function $f$. We consider both \emph{binary feedback} (the learner can only observe whether she wins or not) and \emph{full information feedback} (the learner can observe $m_t$) at the end of each time $t$. For binary feedback, when the noise distribution $\mathcal{F}$ is known, we propose a bidding algorithm that uses the maximum likelihood estimation (MLE) method to achieve regret at most $\widetilde{O}(\sqrt{\log(d) T})$. Moreover, we generalize this algorithm to the setting with binary feedback where the noise distribution is unknown but belongs to a parametrized family of distributions. For full information feedback with an \emph{unknown} noise distribution, we provide an algorithm that achieves regret at most $\widetilde{O}(\sqrt{dT})$. Our approach combines an estimator for log-concave density functions with the MLE method to learn the noise distribution $\mathcal{F}$ and the linear weight $\alpha_0$ simultaneously. We also provide a lower bound showing that any bidding policy in a broad class must incur regret at least $\Omega(\sqrt{T})$, even when the learner receives full information feedback and $\mathcal{F}$ is known.
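
To make the binary-feedback setting concrete, here is a minimal Python sketch, assuming a known logistic noise distribution (a log-concave density). The bid rule and the helper `mle_negloglik` are illustrative assumptions, not the paper's algorithm or its regret-optimal bidding strategy; the sketch only shows how the linear weight $\alpha_0$ can be recovered by MLE from win/loss observations of the form $\mathbf{1}\{b_t \ge m_t\}$.

```python
# Minimal sketch (not the authors' algorithm): binary feedback with a *known*
# logistic noise distribution F, whose density is log-concave. The bid rule
# and refit schedule below are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic CDF F(z) = 1 / (1 + exp(-z))

rng = np.random.default_rng(0)
d, T = 5, 2000
alpha_0 = rng.normal(size=d) / np.sqrt(d)   # true linear weight, unknown to the learner

def mle_negloglik(alpha, X, bids, wins):
    """Negative log-likelihood of the win/loss observations.
    The learner wins at time t iff m_t = alpha_0 . x_t + z_t <= b_t,
    which happens with probability F(b_t - alpha_0 . x_t)."""
    p = expit(bids - X @ alpha)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(wins * np.log(p) + (1 - wins) * np.log(1 - p))

X_hist, bid_hist, win_hist = [], [], []
alpha_hat = np.zeros(d)
for t in range(T):
    x_t = rng.normal(size=d) / np.sqrt(d)
    # Illustrative bid: current estimate of the mean of m_t plus a random margin.
    b_t = float(alpha_hat @ x_t) + rng.uniform(0.0, 1.0)
    m_t = float(alpha_0 @ x_t) + rng.logistic()      # z_t drawn from the logistic noise
    w_t = 1.0 if b_t >= m_t else 0.0                 # binary feedback: win or lose only
    X_hist.append(x_t); bid_hist.append(b_t); win_hist.append(w_t)
    if (t + 1) % 200 == 0:                           # periodically refit the MLE
        res = minimize(mle_negloglik, alpha_hat,
                       args=(np.array(X_hist), np.array(bid_hist), np.array(win_hist)))
        alpha_hat = res.x

print("estimation error:", np.linalg.norm(alpha_hat - alpha_0))
```

With a logistic $F$, the negative log-likelihood above is convex in $\alpha$, so the refit step is an ordinary logistic-regression-style fit; a full bidding algorithm would additionally choose $b_t$ to trade off exploration against the first-price payoff.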

Updated: 2021-09-08