Hardness of Bounded Distance Decoding on Lattices in $\ell_p$ Norms
arXiv - CS - Computational Complexity. Pub Date: 2020-03-17. arXiv:2003.07903
Huck Bennett, Chris Peikert

$ \newcommand{\Z}{\mathbb{Z}} \newcommand{\eps}{\varepsilon} \newcommand{\cc}[1]{\mathsf{#1}} \newcommand{\NP}{\cc{NP}} \newcommand{\problem}[1]{\mathrm{#1}} \newcommand{\BDD}{\problem{BDD}} $Bounded Distance Decoding $\BDD_{p,\alpha}$ is the problem of decoding a lattice when the target point is promised to be within an $\alpha$ factor of the minimum distance of the lattice, in the $\ell_{p}$ norm. We prove that $\BDD_{p, \alpha}$ is $\NP$-hard under randomized reductions where $\alpha \to 1/2$ as $p \to \infty$ (and for $\alpha=1/2$ when $p=\infty$), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large $p$. We also show fine-grained hardness for $\BDD_{p,\alpha}$. For example, we prove that for all $p \in [1,\infty) \setminus 2\Z$ and constants $C > 1, \eps > 0$, there is no $2^{(1-\eps)n/C}$-time algorithm for $\BDD_{p,\alpha}$ for some constant $\alpha$ (which approaches $1/2$ as $p \to \infty$), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for $\BDD$ with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of $\BDD_{p,\alpha}$ by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of $\alpha$ for which the problem is known to be $\NP$-hard for all $p > p_1 \approx 4.2773$, and give the very first fine-grained hardness for $\BDD$ (in any norm). Our reductions rely on a special family of "locally dense" lattices in $\ell_{p}$ norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
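To make the decoding promise concrete, the following is a small illustrative sketch, not taken from the paper (which proves hardness results rather than giving algorithms): a brute-force decoder for toy lattices that computes the minimum distance $\lambda_1^{(p)}$, checks the $\BDD_{p,\alpha}$ promise, and returns the closest lattice vector, all by enumerating coefficient vectors in a small box. The function names, the box-enumeration shortcut, and the toy instance are assumptions for illustration only; the enumeration is exhaustive only for very small examples.

```python
import itertools
import numpy as np

def lp_norm(v, p):
    """ell_p norm of a vector; p = float('inf') is treated as the max norm."""
    if p == float('inf'):
        return np.max(np.abs(v))
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

def enumerate_lattice(B, radius):
    """Yield lattice vectors B @ x for integer coefficient vectors x in [-radius, radius]^n.

    Illustrative shortcut: for a general lattice the shortest vector or the
    closest vector may lie outside this box, so this is only valid for toys.
    """
    n = B.shape[1]
    for coeffs in itertools.product(range(-radius, radius + 1), repeat=n):
        yield B @ np.array(coeffs)

def minimum_distance(B, p, radius=3):
    """lambda_1^(p): length of the shortest nonzero lattice vector found in the box."""
    return min(lp_norm(v, p) for v in enumerate_lattice(B, radius) if np.any(v != 0))

def bdd_decode(B, t, p, alpha, radius=3):
    """Return the lattice vector closest to t, assuming the BDD_{p,alpha} promise holds."""
    lam1 = minimum_distance(B, p, radius)
    closest = min(enumerate_lattice(B, radius), key=lambda v: lp_norm(t - v, p))
    # The promise: the target lies within alpha * lambda_1^(p) of the lattice.
    assert lp_norm(t - closest, p) <= alpha * lam1, "BDD promise violated"
    return closest

# Toy instance (hypothetical): the integer lattice Z^2, p = 2, alpha = 1/2,
# target slightly perturbed from the lattice point (1, 2).
B = np.eye(2)
t = np.array([1.1, 1.9])
print(bdd_decode(B, t, p=2, alpha=0.5))  # -> [1. 2.]
```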

Updated: 2020-03-19