Theoretical Computer Science (IF 0.747), Pub Date: 2020-08-27, DOI: 10.1016/j.tcs.2020.08.025
We consider the problem of learning $k$-parities in the online mistake-bound model: given a hidden vector $x \in \{0,1\}^n$ of Hamming weight $k$ and a sequence of “questions” $a_1, a_2, \dots \in \{0,1\}^n$, where the algorithm must reply to each question with $\langle a_i, x \rangle \pmod{2}$, what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve on the previous best result of Buhrman et al. [3] by an $\exp(k)$ factor in the time complexity.
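To make the online model concrete, here is a toy sketch (not the paper's algorithm, nor Buhrman et al.'s) of the classical halving strategy for $k$-parities: keep every weight-$k$ candidate consistent with the answers so far and predict by majority vote. It makes at most $\log_2 \binom{n}{k}$ mistakes but spends $\binom{n}{k}$ time per question, which is exactly the mistake/time trade-off at issue. All function names are illustrative.

```python
from itertools import combinations

def parity(a, x):
    # Inner product <a, x> mod 2 over {0,1}^n
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def halving_learner(n, k, questions, x):
    """Toy halving algorithm for k-parities: maintain all weight-k
    candidates consistent so far and predict by majority vote.
    Mistakes: at most log2(C(n, k)) = O(k log n); time per question:
    C(n, k) -- the naive end of the mistake/time trade-off."""
    # Enumerate every candidate hidden vector of Hamming weight k.
    candidates = []
    for support in combinations(range(n), k):
        c = [0] * n
        for i in support:
            c[i] = 1
        candidates.append(c)

    mistakes = 0
    for a in questions:
        # Predict the majority answer among surviving candidates.
        votes_for_one = sum(parity(a, c) for c in candidates)
        guess = 1 if 2 * votes_for_one > len(candidates) else 0
        truth = parity(a, x)  # the learner is then told the true answer
        if guess != truth:
            mistakes += 1
        # Keep only candidates consistent with the revealed answer;
        # every mistake eliminates at least half of them.
        candidates = [c for c in candidates if parity(a, c) == truth]
    return mistakes
```

Each mistake at least halves the candidate set, which gives the $O(k \log n)$ mistake bound; the results discussed above concern how much of this mistake efficiency can be kept while beating the $\binom{n}{k}$ running time.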
Next, we consider the problem of learning $k$-parities in the PAC model in the presence of random classification noise of rate $\eta \in (0, \frac{1}{2})$. Here, we observe that even in the presence of classification noise of non-trivial rate, it is possible to learn $k$-parities in time better than $\binom{n}{k/2}$, whereas the current best algorithm for learning noisy $k$-parities, due to Grigorescu et al. [9], inherently requires time $\binom{n}{k/2}$ even when the noise rate is polynomially small.
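For concreteness, the noise model above can be sketched as follows: each example is a uniform $a \in \{0,1\}^n$, and the label $\langle a, x \rangle \bmod 2$ is flipped independently with probability $\eta$. The helper below is an illustrative sample generator, not part of either cited algorithm.

```python
import random

def noisy_parity_samples(x, eta, m, rng=None):
    """Draw m PAC-style samples (a, label) for the hidden parity x:
    a is uniform over {0,1}^n, and the clean label <a, x> mod 2 is
    flipped independently with probability eta (random classification
    noise)."""
    rng = rng or random.Random()
    n = len(x)
    samples = []
    for _ in range(m):
        a = [rng.randint(0, 1) for _ in range(n)]
        label = sum(ai & xi for ai, xi in zip(a, x)) % 2
        if rng.random() < eta:
            label ^= 1  # classification noise flips the observed label
        samples.append((a, label))
    return samples
```

At $\eta = 0$ this reduces to the noiseless PAC setting; as $\eta \to \frac{1}{2}$ the labels carry vanishing information, which is why the running-time dependence on the noise rate matters in the results above.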