Trace Reconstruction: Generalized and Parameterized
IEEE Transactions on Information Theory ( IF 2.5 ) Pub Date : 2021-03-17 , DOI: 10.1109/tit.2021.3066010
Akshay Krishnamurthy , Arya Mazumdar , Andrew McGregor , Soumyabrata Pal

In the beautifully simple-to-state problem of trace reconstruction, the goal is to reconstruct an unknown binary string $x$ given random “traces” of $x$, where each trace is generated by deleting each coordinate of $x$ independently with probability $p < 1$. The problem is well studied both when the unknown string is arbitrary and when it is chosen uniformly at random. In both settings there is still an exponential gap between the upper and lower sample-complexity bounds, and our understanding of the problem remains surprisingly limited. In this paper, we consider natural parameterizations and generalizations of this problem in an effort to attain a deeper and more comprehensive understanding. Perhaps our most surprising results are: 1) We prove that $\exp(O(n^{1/4}\sqrt{\log n}))$ traces suffice for reconstructing arbitrary matrices. In the matrix version of the problem, each row and column of an unknown $\sqrt{n}\times\sqrt{n}$ matrix is deleted independently with probability $p$. This contrasts with sequence reconstruction, where the best known upper bound is $\exp(O(n^{1/3}))$. 2) An optimal result for random matrix reconstruction: we show that $\Theta(\log n)$ traces are necessary and sufficient. This contrasts with the problem for random sequences, where there is a super-logarithmic lower bound and the best known upper bound is $\exp(O(\log^{1/3} n))$. 3) We show that $\exp(O(k^{1/3}\log^{2/3} n))$ traces suffice to reconstruct $k$-sparse strings, improving over the best known sequence-reconstruction results when $k = o(n/\log^{2} n)$. 4) We show that $\textrm{poly}(n)$ traces suffice if $x$ is $k$-sparse and we additionally have a “separation” promise, specifically that the indices of 1’s in $x$ all differ by $\Omega(k \log n)$.
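The trace-generation process described above is easy to make concrete. The following is a minimal sketch (function names are illustrative, not from the paper) of how a single trace is produced for both the sequence version and the matrix version, where each coordinate, or each row and column, is deleted independently with probability $p$:

```python
import random

def sequence_trace(x, p, rng=random):
    """One trace of binary string x: each coordinate is deleted
    independently with probability p (deletion-channel model)."""
    return [bit for bit in x if rng.random() >= p]

def matrix_trace(m, p, rng=random):
    """Matrix version: each row and each column of m is deleted
    independently with probability p; surviving entries keep
    their relative order."""
    if not m:
        return []
    kept_rows = [i for i in range(len(m)) if rng.random() >= p]
    kept_cols = [j for j in range(len(m[0])) if rng.random() >= p]
    return [[m[i][j] for j in kept_cols] for i in kept_rows]

# Example: with p = 0 nothing is deleted; with p = 1 everything is.
print(sequence_trace([1, 0, 1, 1], 0.0))  # [1, 0, 1, 1]
print(sequence_trace([1, 0, 1, 1], 1.0))  # []
```

The reconstruction task is the inverse problem: given many independent outputs of these functions (but not the randomness), recover the original $x$ or matrix.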

Updated: 2021-05-25