When Is Amplification Necessary for Composition in Randomized Query Complexity?
arXiv - CS - Computational Complexity Pub Date: 2020-06-19, DOI: arxiv-2006.10957
Shalev Ben-David, Mika Göös, Robin Kothari, Thomas Watson

Suppose we have randomized decision trees for an outer function $f$ and an inner function $g$. The natural approach for obtaining a randomized decision tree for the composed function $(f\circ g^n)(x^1,\ldots,x^n)=f(g(x^1),\ldots,g(x^n))$ involves amplifying the success probability of the decision tree for $g$, so that a union bound can be used to bound the error probability over all the coordinates. The amplification introduces a logarithmic factor cost overhead. We study the question: When is this log factor necessary? We show that when the outer function is parity or majority, the log factor can be necessary, even for models that are more powerful than plain randomized decision trees. Our results are related to, but qualitatively strengthen in various ways, known results about decision trees with noisy inputs.
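
As a brief worked sketch of the amplification argument just described (this is the standard upper bound; the notation $R(\cdot)$ for bounded-error randomized query complexity is our shorthand, not taken from the abstract): running the $1/3$-error decision tree for $g$ independently $t$ times and taking the majority answer errs with probability at most $2^{-\Omega(t)}$ by a Chernoff bound, so

$$t = O(\log n) \quad\Longrightarrow\quad \Pr[\text{majority answer for one coordinate is wrong}] \le \frac{1}{3n}.$$

A union bound over the $n$ coordinates then keeps the probability that any simulated value $g(x^i)$ is wrong below $1/3$, yielding

$$R(f \circ g^n) \le O\bigl(R(f) \cdot R(g) \cdot \log n\bigr),$$

and the question studied here is when the $\log n$ factor in this bound is unavoidable.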

Updated: 2020-06-22