Maximizing Welfare with Incentive-Aware Evaluation Mechanisms
arXiv - CS - Computer Science and Game Theory. Pub Date: 2020-11-03. DOI: arxiv-2011.01956. Nika Haghtalab, Nicole Immorlica, Brendan Lucier, Jack Z. Wang
Motivated by applications such as college admissions and insurance rate
determination, we propose an evaluation problem in which the inputs are
controlled by strategic individuals who can modify their features at a cost. A
learner can only partially observe the features, and aims to classify
individuals with respect to a quality score. The goal is to design an
evaluation mechanism that maximizes the overall quality score, i.e., welfare,
in the population, taking any strategic updating into account. We further study
the algorithmic aspects of finding the welfare-maximizing evaluation mechanism
under two specific settings in our model. When scores are linear and mechanisms
use linear scoring rules on the observable features, we show that the optimal
evaluation mechanism is an appropriate projection of the quality score. When
mechanisms must use linear thresholds, we design a polynomial-time algorithm
with a (1/4)-approximation guarantee when the underlying feature distribution
is sufficiently smooth and admits an oracle for finding dense regions. We
extend our results to settings where the prior distribution is unknown and must
be learned from samples.
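The linear case described above can be illustrated with a small sketch. Assume (as a simplification of the paper's model, not its exact formulation) that the true quality score is q(x) = w · x over d features, the learner sees only a subset of coordinates, and each agent best-responds to a linear rule by shifting its features along the rule's direction within a unit-norm budget standing in for the modification cost. Under these assumptions, the welfare gain per agent is w · Δ, and by Cauchy-Schwarz it is maximized when the rule is proportional to the projection of w onto the observable coordinates. All names (`welfare_gain`, `observable`) are ours, chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w = rng.normal(size=d)            # true (hidden-in-part) quality weights q(x) = w . x
observable = np.array([0, 2, 3])  # coordinates the learner can actually see

def welfare_gain(v_obs):
    """Average quality increase when agents best-respond to the linear rule v_obs.

    Lift the rule to full feature space (zeros on unobservable coordinates);
    an agent with a unit-norm modification budget moves along v / ||v||,
    so welfare rises by w . (v / ||v||).
    """
    v = np.zeros(d)
    v[observable] = v_obs
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return 0.0
    return float(w @ (v / norm))

# The projection of w onto the observable coordinates ...
proj = w[observable]

# ... beats any other linear rule on those coordinates (Cauchy-Schwarz):
best_random = max(
    welfare_gain(rng.normal(size=len(observable))) for _ in range(1000)
)
print(welfare_gain(proj) >= best_random)
```

The achieved welfare gain for the projection rule is exactly the norm of the observable part of w, which is why no competing linear rule can do better in this stylized setting.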
Updated: 2020-11-05