1 Introduction

Modern machine learning systems are extremely sensitive to training set contamination. Since sources of error and noise are unavoidable in real-world data (e.g., due to Mechanical Turkers, selection bias, or adversarial attacks), an urgent need has arisen to perform automatic debugging of large data sets. Cadamuro et al. (2016) and Zhang et al. (2018) proposed a method called “machine learning debugging” to identify training set errors by introducing new clean data. Consider the following real-world scenario: Company A collects movie ratings from users on a media platform, from which it learns relationships between features of movies and ratings in order to perform future recommendations. A competing company B knows A’s learning method and hires some users to provide malicious ratings. Company A could employ a robust method for learning from contaminated data, but in the long run, it would be more effective for company A to identify the adversarial users and prevent them from submitting additional buggy ratings in the future. This distinguishes debugging from classical learning. The debugging problem also assumes that company A can hire an expert to rate movies, yielding a second, trusted data set that is generally smaller than the original data set due to budget limitations. In this paper, we will study a theoretical framework for the machine learning debugging problem in a linear regression setting, where the main goal is to identify bugs in the data. We will also discuss theory and algorithms for selecting the trusted data set.

Our first contribution is to provide a rigorous theoretical framework explaining how to identify errors in the “buggy” data pool. Specifically, we embed a squared loss term applied to the trusted data pool into the extended Lasso algorithm proposed by Nguyen and Tran (2013), and reformulate the objective to better serve the debugging task. Borrowing techniques from robust statistics (Huber and Ronchetti 2011; She and Owen 2011; Nguyen and Tran 2013; Foygel and Mackey 2014; Slawski and Ben-David 2017) and leveraging results on support recovery analysis (Wainwright 2009; Meinshausen and Yu 2009), we provide sufficient conditions for successful debugging in linear regression. We emphasize that our setting, involving data coming from multiple pools, has not been studied in any of the earlier papers.

The work of Nguyen and Tran (2013) and Foygel and Mackey (2014) [and, more recently, Sasai and Fujisawa (2020)] provided results for the extended Lasso with a theoretically optimal choice of tuning parameter, which depends on the unknown noise variance in the linear model. Our second contribution is to discuss a rigorous procedure for tuning parameter selection which does not require knowledge of the noise variance. Specifically, our algorithm starts from a sufficiently large initial tuning parameter that produces the all-zeros vector as an estimator. Assuming the sufficient conditions for successful support recovery are met, this tuning parameter selection algorithm is guaranteed to terminate with a correct choice of tuning parameter after a logarithmic number of steps. Note that when outliers exist in the training data set, it is inappropriate to use cross-validation to select the tuning parameter due to possible outliers in the validation data set.

Our third contribution considers how to design a second clean data pool, which is an important but previously unstudied problem in machine learning debugging. We consider a two-player “game” between a bug generator and debugger, where the bug generator performs adversarial attacks (Chakraborty et al. 2018), and the debugger applies Lasso-based linear regression to the augmented data set. On the theoretical side, we establish a sufficient condition under which the debugger can always beat the bug generator, and show how to translate this condition into a debugging strategy based on mixed integer linear programming. Our theory is only derived in the “noiseless” setting; nonetheless, empirical simulations show that our debugging strategy also performs well in the noisy setting. We experimentally compare our method to two other algorithms motivated by the machine learning literature: one that designs two neural networks, one to correct labels and one to fit the cleaned data (Veit et al. 2017), and one based on semi-supervised learning that weights the noisy and clean datasets differently and employs a similarity matrix based on the graph Laplacian (Fergus et al. 2009).

The remainder of the paper is organized as follows: Sect. 2 introduces our novel framework for machine learning debugging using weighted M-estimators. Section 3 provides theoretical guarantees for recovery of buggy data points. Section 4 presents our algorithm for tuning parameter selection and corresponding theoretical guarantees. Section 5 discusses strategies for designing the second pool. Section 6 provides experimental results. Section 7 concludes the paper.

Notation We write \(\varLambda _{\min }(A)\) and \(\varLambda _{\max }(A)\) to denote the minimum and maximum eigenvalues, respectively, of a matrix A. We use Null(A) to denote the nullspace of A. For subsets of row and column indices S and T, we write \(A_{S,T}\) to denote the corresponding submatrix of A. We write \(\Vert A\Vert _{\max }\) to denote the elementwise \(\ell _\infty\)-norm, \(\Vert A\Vert _2\) to denote the spectral norm, and \(\Vert A\Vert _\infty\) to denote the \(\ell _\infty\)-operator norm. For a vector \(v \in \mathbb {R}^n\), we write \({\text {supp}}(v) \subseteq \{1, \dots , n\}\) to denote the support of v, and \(\Vert v\Vert _{\infty } = \max |v_i|\) to denote the maximum absolute entry. We write \(\Vert v\Vert _p\) to denote the \(\ell _p\)-norm, for \(p \ge 1\). We write \({\text {diag}}(v)\) to denote the \(n \times n\) diagonal matrix with entries equal to the components of v. For \(S \subseteq \{1, \dots , n\}\), we write \(v_S\) to denote the |S|-dimensional vector obtained by restricting v to S. We write [n] as shorthand for \(\{1, \dots , n\}\).

2 Problem formulation

We first formalize the data-generating models analyzed in this paper. Suppose we have observation pairs \(\{(x_i, y_i)\}_{i=1}^n\) from the contaminated linear model

$$\begin{aligned} y_i = x_i^\top \beta ^*+ \gamma ^*_i + \epsilon _i, \quad 1 \le i \le n, \end{aligned}$$
(1)

where \(\beta ^*\in \mathbb {R}^p\) is the unknown regression vector, \(\gamma ^*\in \mathbb {R}^n\) represents possible contamination in the labels, and the \(\epsilon _i\)’s are i.i.d. sub-Gaussian noise variables with variance parameter \(\sigma ^2\). We also assume the \(x_i\)’s are i.i.d. and \(x_i \perp \!\!\! \perp \epsilon _i\). This constitutes the “first pool.” Note that the vector \(\gamma ^*\) is unknown and may be generated by some adversary. If \(\gamma ^*_i = 0\), the \(i\)th point is uncontaminated and follows the usual linear model; if \(\gamma ^*_i \ne 0\), the \(i\)th point is contaminated/buggy. Let \(T : = {\text {supp}}(\gamma ^*)\) denote the indices of the buggy points, and let \(t := |T|\) denote the number of bugs.

We also assume we have a clean data set which we call the “second pool.” We observe \(\{(\widetilde{x}_i,\widetilde{y}_i)\}_{i=1}^m\) satisfying

$$\begin{aligned} \widetilde{y}_i = \widetilde{x}_i^\top \beta ^*+ \widetilde{\epsilon }_i, \quad 1 \le i \le m, \end{aligned}$$
(2)

where the \(\widetilde{\epsilon }_i\)’s are i.i.d. sub-Gaussian noise variables with parameter \(\widetilde{\sigma }^2\). Let \(L := \frac{\sigma }{\widetilde{\sigma }}\), and suppose \(L \ge 1\). Unlike the first pool, the data points in the second pool are all known to be uncontaminated.

For notational convenience, we also use \(X \in \mathbb {R}^{n \times p}\), \(y \in \mathbb {R}^n\), and \(\epsilon \in \mathbb {R}^n\) to denote the matrix/vectors containing the \(x_i\)’s, \(y_i\)’s, and \(\epsilon _i\)’s, respectively. Similarly, we define the matrix \(\widetilde{X}\in \mathbb {R}^{m \times p}\) and the vectors \(\widetilde{y}\in \mathbb {R}^m\) and \(\widetilde{\epsilon }\in \mathbb {R}^m\). Note that \(\beta ^*, \gamma ^*, T, t\), and the noise parameters \(\sigma\) and \(\widetilde{\sigma }\) are all assumed to be unknown to the debugger. In this paper, we will work in settings where \(X^\top X\) is invertible.

Goal: Upon observing \(\{(x_i, y_i)\}_{i=1}^n\), the debugger is allowed to design m points \(\widetilde{X}\) in a stochastic or deterministic manner and query their corresponding labels \(\widetilde{y}\), with the goal of recovering the support of \(\gamma ^*\). We have the following definitions:

Definition 1

An estimator \(\widehat{\gamma }\) satisfies subset support recovery if \({\text {supp}}(\widehat{\gamma }) \subseteq {\text {supp}}(\gamma ^*)\). It satisfies exact support recovery if \({\text {supp}}(\widehat{\gamma }) = {\text {supp}}(\gamma ^*)\).

In words, when \(\widehat{\gamma }\) satisfies subset support recovery, all estimated bugs are true bugs. When \(\widehat{\gamma }\) satisfies exact support recovery, the debugger correctly flags all bugs. We are primarily interested in exact support recovery.

Weighted M-estimation Algorithm: We propose to optimize the joint objective

$$\begin{aligned} \begin{aligned} (\widehat{\beta }, \widehat{\gamma }) \in&\arg \min _{\beta \in \mathbb {R}^p, \gamma \in \mathbb {R}^n} \left\{ \frac{1}{2n} \Vert y - X\beta - \gamma \Vert _2^2 + \frac{\eta }{2m} \Vert \widetilde{y}- \widetilde{X}\beta \Vert _2^2 + \lambda \Vert \gamma \Vert _1\right\} , \end{aligned} \end{aligned}$$
(3)

where the weight parameter \(\eta > 0\) determines the relative importance of the two data pools. The objective function applies the usual squared loss to the points in the second pool and introduces the additional variable \(\gamma\) to help identify bugs in the first pool. Furthermore, the \(\ell _1\)-penalty encourages \(\widehat{\gamma }\) to be sparse, since we are working in settings where the number of outliers is relatively small compared to the total number of data points. Note that the objective function (3) may equivalently be formulated as a weighted sum of M-estimators applied to the first and second pools, where the loss for the first pool is the robust Huber loss and the loss for the second pool is the squared loss (cf. Proposition 4 in Appendix A).

Lasso Reformulation: Recall that our main goal is to estimate (the support of) \(\gamma ^*\) rather than \(\beta ^*\). Thus, we will restrict our attention to \(\gamma ^*\) by reformulating the objectives appropriately. We first introduce some notation: Define the stacked vectors/matrices

$$\begin{aligned} X' = \begin{pmatrix} X \\ \sqrt{\frac{\eta n}{m}} \widetilde{X}\end{pmatrix}, \quad y' = \begin{pmatrix} y \\ \sqrt{\frac{\eta n}{m}} \widetilde{y}\end{pmatrix}, \quad \epsilon ' = \begin{pmatrix} \epsilon \\ \sqrt{\frac{\eta n}{m}} {\tilde{\epsilon }} \end{pmatrix}, \end{aligned}$$
(4)

where \(X' \in \mathbb {R}^{(m+n) \times p}\) and \(y', \epsilon ' \in \mathbb {R}^{m+n}\). For a matrix A, let \(P_A=A(A^\top A)^{-1}A^\top\) and \(P_A^\perp =I - A(A^\top A)^{-1}A^\top\) denote projection matrices onto the column space of A and its orthogonal complement, respectively. For a subset \(S \subseteq [n]\), let \(M_S\) denote the \((n+m) \times |S|\) matrix with ith column equal to the canonical vector \(e_{S(i)}\). Thus, right-multiplying by \(M_S\) truncates a matrix to only include columns indexed by S. We have the following useful result:

Proposition 1

The objective function

$$\begin{aligned} \widehat{\gamma }\in \arg \min _{\gamma \in \mathbb {R}^n} \Big \{\frac{1}{2n} \Vert P_{X'}^\perp y' - P_{X'}^\perp M_{[n]} \gamma \Vert _2^2 + \lambda \Vert \gamma \Vert _1\Big \} \end{aligned}$$
(5)

shares the same solution for \(\widehat{\gamma }\) with the objective function (3).

Proposition 1, proved in Appendix B, translates the joint optimization problem (3) into an optimization problem only involving the parameter of interest \(\gamma\). We provide a discussion regarding the corresponding solution \(\widehat{\beta }\) in Appendix A for the interested reader. Note that the optimization problem (5) corresponds to linear regression of the vector/matrix pairs \((P_{X'}^\perp y', P_{X'}^\perp M_{[n]})\) with a Lasso penalty, inspiring us to borrow techniques from high-dimensional statistics.
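To make the reformulation concrete, here is a minimal Python sketch (assuming NumPy and scikit-learn are available) that builds the stacked quantities in (4), forms \(P_{X'}^\perp\), and solves (5) with an off-the-shelf Lasso solver; the function name debug_lasso and the rescaling of the sklearn penalty are our illustrative choices, not part of the paper.

```python
# Minimal sketch of the Lasso reformulation (5); nonzero entries of the
# returned vector flag suspected bugs in the first pool.
import numpy as np
from sklearn.linear_model import Lasso

def debug_lasso(X, y, X_clean, y_clean, eta, lam):
    n, p = X.shape
    m = X_clean.shape[0]
    w = np.sqrt(eta * n / m)
    Xp = np.vstack([X, w * X_clean])           # X' in (4)
    yp = np.concatenate([y, w * y_clean])      # y' in (4)
    # P_{X'}^perp = I - X'(X'^T X')^{-1} X'^T
    Pperp = np.eye(n + m) - Xp @ np.linalg.solve(Xp.T @ Xp, Xp.T)
    A = Pperp[:, :n]                           # P_{X'}^perp M_[n]
    b = Pperp @ yp                             # P_{X'}^perp y'
    # sklearn's Lasso minimizes ||b - A g||_2^2 / (2(n+m)) + alpha*||g||_1,
    # so alpha = lam * n / (n + m) reproduces the 1/(2n) scaling in (5).
    fit = Lasso(alpha=lam * n / (n + m), fit_intercept=False, max_iter=10000)
    return fit.fit(A, b).coef_
```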

3 Support recovery

The reformulation (5) allows us to analyze the machine learning debugging framework through the lens of Lasso support recovery. The three key conditions we impose to ensure support recovery are provided below. Recall that we use \(M_T\) to represent the truncation matrix indexed by T.

Assumption 1

(Minimum Eigenvalue) Assume that there is a positive number \(b'_{\min }\) such that

$$\begin{aligned} \varLambda _{\min }\left( M_T^\top P_{X'}^\perp M_T\right) \ge b'_{\min }. \end{aligned}$$
(6)

Assumption 2

(Mutual Incoherence) Assume that there is a number \(\alpha ' \in [0,1)\) such that

$$\begin{aligned} \Vert M_{T^c}^\top P_{X'}^\perp M_T (M_T^\top P_{X'}^\perp M_T)^{-1} \Vert _\infty \le \alpha '. \end{aligned}$$
(7)

Assumption 3

(Gamma-Min) Assume that

$$\begin{aligned} \begin{aligned} \min _{i \in T} |\gamma ^*_i| > G'&:= \Vert (M_T^\top P_{X'}^\perp M_T)^{-1}M_T^\top P_{X'}^\perp \epsilon '\Vert _\infty + n\lambda \left\| {(M_T^\top P_{X'}^\perp M_T)^{-1}}\right\| _{\infty }. \end{aligned} \end{aligned}$$
(8)

Assumption 1 comes from a primal-dual witness argument (Wainwright 2009) to guarantee that the minimizer \(\widehat{\gamma }\) is unique. Assumption 2 measures a relationship between the sets \(T^c\) and T, indicating that the large number of nonbuggy covariates (i.e., \(T^c\)) cannot exert an overly strong effect on the subset of buggy covariates (Ravikumar et al. 2010). To aid intuition, consider an orthogonal design, where \(X = \begin{bmatrix} c I_{[t], [p]} \\ c' I_{p \times p} \end{bmatrix}\) and \(\widetilde{X}= c'' I_{p \times p}\), for some \(t < p\), and \(c, c', c'' > 0\). We use the notation \(I_{[t], [p]}\) to denote a submatrix of \(I_{p \times p }\) with rows indexed by the set [t]. Suppose the first t points are bugs, and for simplicity, let \(\eta = m/n\). Then the mutual incoherence condition requires \(c < c' + \frac{(c'')^2}{c'}\), meaning that in every direction \(e_i\), the component of buggy data cannot be too large compared to the nonbuggy data and the clean data. Assumption 3 lower-bounds the minimum absolute value of the nonzero elements of \(\gamma ^*\). Note that \(\lambda\) is chosen based on \(\epsilon '\), so the right-hand expression is a function of \(\epsilon '\). This assumption indeed captures the intuition that the signal-to-noise ratio, \(\frac{\min _{i\in T}{|\gamma ^*_i|}}{\sigma }\), needs to be sufficiently large.
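For a given design and a candidate bug set T, the quantities appearing in Assumptions 1 and 2 can be evaluated numerically via the identity \(M_S^\top P_{X'}^\perp M_S = (P_{X'}^\perp )_{S,S}\); the following sketch (our helper, assuming NumPy) does exactly this. For the design of Example 1 below, the returned \(\alpha '\) can be checked to match the closed-form expression given there.

```python
# Sketch: evaluate the minimum-eigenvalue and mutual-incoherence quantities of
# Assumptions 1-2 for a candidate bug set T (indices into the first pool).
import numpy as np

def check_assumptions(X, X_clean, eta, T):
    n, p = X.shape
    m = X_clean.shape[0]
    w = np.sqrt(eta * n / m)
    Xp = np.vstack([X, w * X_clean])
    Pperp = np.eye(n + m) - Xp @ np.linalg.solve(Xp.T @ Xp, Xp.T)
    T = np.asarray(T)
    Tc = np.setdiff1d(np.arange(n), T)
    PTT = Pperp[np.ix_(T, T)]                  # M_T^T P_perp M_T
    b_min = np.linalg.eigvalsh(PTT).min()      # Assumption 1
    B = Pperp[np.ix_(Tc, T)] @ np.linalg.inv(PTT)
    alpha = np.abs(B).sum(axis=1).max()        # l_inf operator norm, Assumption 2
    return b_min, alpha
```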

We now provide two general theorems regarding subset support recovery and exact support recovery.

Theorem 1

(Subset support recovery) Suppose \(P_{X'}^\perp\) satisfies Assumptions 1 and 2. If the tuning parameter satisfies

$$\begin{aligned} \begin{aligned} \lambda&\ge \frac{2}{1-\alpha '} \Big \Vert M_{T^c}^\top P_{X'}^\perp \Big (I - P_{X'}^\perp M_T(M_T^\top P_{X'}^\perp M_T)^{-1}M_T^\top P_{X'}^\perp \Big )\frac{\epsilon '}{n}\Big \Vert _\infty , \end{aligned} \end{aligned}$$
(9)

then the objective (5) has a unique optimal solution \(\widehat{\gamma }\), satisfying \({\text {supp}}(\widehat{\gamma }) \subseteq {\text {supp}}(\gamma ^*)\) and \(\left\Vert \widehat{\gamma }- \gamma ^*\right\Vert _{\infty } \le G'\).

Theorem 2

(Exact support recovery) In addition to the assumptions in Theorem 1, suppose Assumption 3 holds. Then we have a unique optimal solution \(\widehat{\gamma }\), which satisfies exact support recovery.

Note that we additionally need Assumption 3 to guarantee exact support recovery. This follows the aforementioned intuition regarding the assumption. In particular, recall that \(\epsilon\) and \(\widetilde{\epsilon }\) are sub-Gaussian vectors with parameters \(\sigma ^2\) and \(\sigma ^2/L\), respectively, where \(L \ge 1\) (i.e., the clean data pool has smaller noise). The minimum signal strength \(\min _{i\in T}{|\gamma ^*_i|}\) needs to be at least \(\varTheta (\sigma \sqrt{\log n})\), since \(\mathbb {E}\left[ \max _{i \in [n]} |\epsilon _i|\right] \le \sigma \sqrt{2 \log (2n)}\). Intuitively, if \(\min _{i\in T}{|\gamma ^*_i|}\) is of constant order, it is difficult for the debugger to distinguish between random noise and intentional contamination.

We now present two special cases to illustrate the theoretical benefits of including a second data pool. Although Theorems 1 and 2 are stated in terms of deterministic design matrices and error vectors \(\epsilon\) and \(\widetilde{\epsilon }\), the assumptions can be shown to hold with high probability in these examples. We provide formal statements of the associated results in Appendix C.2 and Appendix C.3.

Example 1

(Orthogonal design) Suppose Q is an orthogonal matrix with columns \(q_1, q_2, \dots , q_p\), and consider the setting where \(X_T = RQ^\top \in \mathbb {R}^{t \times p}\) and \(X_{T^c} = FQ^\top \in \mathbb {R}^{p \times p}\), where \(R = \left[ {\text {diag}}(\{r_i\}_{i=1}^t) \mid \varvec{0}_{t\times (p-t)}\right]\) and \(F = {\text {diag}}(\{f_i\}_{i=1}^p)\). Thus, points in the contaminated first pool correspond to orthogonal vectors. Similarly, suppose the second pool consists of (rescaled) columns of Q, so \(\widetilde{X}= WQ^\top \in \mathbb {R}^{m \times p}\), where \(W = {\text {diag}}(\{w_i\}_{i=1}^p)\). (To visualize this setting, one can consider \(Q = I\) as a special case.) The mutual incoherence parameter is \(\alpha ' = \max _{1 \le i \le t}\left| \frac{r_i f_i}{f_i^2 + \eta \frac{n}{m}w_i^2}\right|\). Hence, \(\alpha ' \ge 1\) whenever the weight of a contaminated point dominates the weights of the corresponding clean points in some direction, e.g., when \(|r_i| > |f_i|\) and \(w_i = 0\); in contrast, if the second pool includes clean points \(w_iq_i\) with sufficiently large \(|w_i|\), we can guarantee that \(\alpha ' < 1\). Furthermore,

$$\begin{aligned} G'\approx & \sigma \left( \sqrt{2\log t}+c\right) \sqrt{1+\max _{1\le i \le t}\frac{r_i^2(Lf_i^2+\frac{\eta n}{m}w_i^2)}{L(f_i^2 + \frac{\eta n}{m}w_i^2)^2}} \\&+ \frac{2\sigma }{1-\alpha '} \left( \sqrt{\log 2(n-t)} +C\right) \left( 1+\max _{1\le i \le t}\frac{r_i^2}{f_i^2+\frac{\eta n}{m}w_i^2}\right) \end{aligned}$$

for some constant C. It is not hard to verify that \(G'\) decreases by adding a second pool. Further note that the behavior of the non-buggy subspace, \(\mathrm {span}\{q_{t+1}, \dots , q_p\}\), is not involved in any conditions or conclusions. Thus, our key observation is that the theoretical results for support recovery consistency only rely on the addition of second-pool points in buggy directions.

Example 2

(Random design) Consider a random design setting where the rows of X and \(\widetilde{X}\) are drawn from a common sub-Gaussian distribution with covariance \(\varSigma\). The conditions in Assumptions 1–3 are relaxed in the presence of a second data pool when n and m are large compared to p: First, \(b_{\min }'\) increases by adding a second pool. Second, \(\alpha ' \approx \frac{\Vert X_{T^c}\varSigma ^{-1}X_T\Vert _\infty }{n-t+\eta n}\), so the mutual incoherence parameter also decreases by adding a second pool. Third,

$$\begin{aligned} G' \approx \frac{2\sigma \sqrt{\log t}}{b'_{\min }} + \frac{2\sigma }{1-\alpha '}\max \left\{ 1,\sqrt{\frac{\eta n}{mL}}\right\} \left\| (I_{t\times t} - \frac{X_T\varSigma ^{-1}X_T^\top }{n+\eta n})^{-1}\right\| _\infty , \end{aligned}$$

where \(X_T\) and \(X_{T^c}\) represent the submatrices of X with rows indexed by T and \(T^c\), respectively. Note that the one-pool case corresponds to \(\eta = 0\) and \(\left\| (I_{t\times t} - \frac{X_T\varSigma ^{-1}X_T^\top }{n+\eta n})^{-1}\right\| _\infty < \left\| (I_{t\times t} - \frac{X_T\varSigma ^{-1}X_T^\top }{n})^{-1}\right\| _\infty\), so if we choose \(\eta \le \frac{mL}{n}\), then \(G'\) decreases by adding a second pool. Therefore, all three assumptions are relaxed by having a second pool, making it easier to achieve exact support recovery.

We also briefly discuss the three assumptions with respect to the weight parameter \(\eta\): Increasing \(\eta\) always relaxes the eigenvalue and mutual incoherence conditions, so placing more weight on the second pool generally helps with subset support recovery. However, the same trend does not necessarily hold for exact recovery. This is because a larger value of \(\eta\) causes the lower bound (9) on \(\lambda\) to increase, resulting in a stricter gamma-min condition. Therefore, there is a tradeoff for selecting \(\eta\).

4 Tuning parameter selection

A drawback of the results in the previous section is that the proper choice of tuning parameter depends on a lower bound (9) which cannot be calculated without knowledge of the unknown parameters \((T, \alpha ', \epsilon ')\). The tuning parameter \(\lambda\) determines how many outliers a debugger detects; if \(\lambda\) is large, then \(\widehat{\gamma }\) contains more zeros and the algorithm detects fewer bugs. A natural question arises: In settings where the conditions for exact support recovery hold, can we select a data-dependent tuning parameter that correctly identifies all bugs? In this section, we propose an algorithm which answers this question in the affirmative.

4.1 Algorithm and theoretical guarantees

Our tuning parameter selection algorithm is summarized in Algorithm 1, which searches through a range of parameter values for \(\lambda\), starting from a large value \(\lambda _u\) and then halving the parameter on each successive step until a stopping criterion is met. The intuition is as follows: First, let \(\lambda ^*\) be the right-hand expression of inequality (9). Suppose that for any value in \(I = [\lambda ^*, 2 \lambda ^*]\), support recovery holds. Then given \(\lambda _u > \lambda ^*\), the geometric sequence \(\varLambda = \left\{ \lambda _u, \frac{\lambda _u}{2}, \frac{\lambda _u}{4}, \dots \right\}\) must contain at least one correct parameter for exact support recovery since \(\varLambda \cap I \ne \emptyset\), guaranteeing that the algorithm stops. As for the stopping criterion, let \(X_S\) denote the submatrix of X with rows indexed by S for \(T^c \subseteq S \subseteq [n]\). We have \(P_{X_S}^\perp \overset{|S| \rightarrow \infty }{\longrightarrow } \left( 1-\frac{p}{|S|}\right) I\) under some mild assumptions on X, in which case \(P_{X_S}^\perp y_S \rightarrow \left( 1-\frac{p}{|S|}\right) (\gamma ^*_S + \epsilon _S)\). When \(\lambda\) is large and the conditions hold for subset support recovery but not exact recovery, we have \(S \cap T \ne \emptyset\), so

$$\begin{aligned} \min |P_{X_S}^\perp y_S| \ge \left( 1-\frac{p}{|S|}\right) \left( \min |\gamma ^*_T| - \max _{i\in [n]} |\epsilon _i|\right) . \end{aligned}$$

In contrast, when \(S = T^c\), we have

$$\begin{aligned} \min |P_{X_S}^\perp y_S| \le \left( 1-\frac{p}{|S|}\right) \max _{i\in [n]} |\epsilon _i|. \end{aligned}$$

When \(\min |\gamma ^*_T|\) is large enough, the task then reduces to choosing a proper threshold to distinguish the error \(|\epsilon _{T^c}|\) from the bug signal \(|\gamma ^*_T|\), which occurs when the threshold is chosen between \(\max _i|\epsilon _i|\) and \(\min _{i\in T}|\gamma ^*_i| - \max _i|\epsilon _i|\).

(Algorithm 1: tuning parameter selection)
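Since the pseudocode is not reproduced above, the following Python sketch conveys the structure of Algorithm 1 as described in this subsection: solver(lam) stands for any routine returning \(\widehat{\gamma }\) from (5), and tau stands for the data-driven cutoff that the algorithm constructs from \({\bar{c}}\); both are supplied by the caller here, so this is an illustration of the halving search rather than the exact algorithm.

```python
# Sketch of the halving search: start from a large lambda_u and halve until the
# residuals of the points currently deemed clean contain no bug-sized entries.
import numpy as np

def select_lambda(X, y, solver, lam_u, tau, max_halvings=60):
    n, p = X.shape
    lam = lam_u
    for _ in range(max_halvings):
        gamma_hat = solver(lam)
        S = np.flatnonzero(np.abs(gamma_hat) < 1e-10)    # estimated non-buggy points
        XS, yS = X[S], y[S]
        r = yS - XS @ np.linalg.solve(XS.T @ XS, XS.T @ yS)   # P^perp_{X_S} y_S
        # Entries of |r| above the cutoff indicate bugs still hiding in S
        # (cf. the two displays above); keep halving until none remain.
        if np.all(np.abs(r) <= (1.0 - p / len(S)) * tau):
            return lam
        lam /= 2.0
    return lam
```

In the experiments of Sect. 6.2, \(\lambda _u\) is taken to be \(\frac{2\Vert P_{X}^\perp y \Vert _\infty }{n}\), which can be computed from the observed data.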

With the above intuition, we now state our main result concerning exact recovery guarantees for our algorithm. Recall that the \(\epsilon _i\)’s are sub-Gaussian with parameter \(\sigma ^2\).

Let \(c_t := \frac{t}{n} < \frac{1}{2}\) denote the fraction of outliers. We assume knowledge of a constant \({\bar{c}}\) that satisfies \(c_t + {\mathbb {P}}[|\epsilon _i| \le {\bar{c}} \sigma ] < \frac{1}{2}\). Note that a priori knowledge of \({\bar{c}}\) is a less stringent assumption than knowing \(\sigma\), since we can always choose \({\bar{c}}\) to be close to zero. For instance, if we know the \(\epsilon _i\)’s are Gaussian, we can choose \({\bar{c}} < \mathrm {erf}^{-1}(\frac{1}{2} - c_t)\); in practice, we can usually estimate \(c_t\) to be less than \(\frac{1}{3}\), so we can take \({\bar{c}} = \mathrm {erf}^{-1}(\frac{1}{6})\). As shown later, the tradeoff is that having a larger value of \({\bar{c}}\) provides the desired guarantees under weaker requirements on the lower bound of \(\min _{i \in T} |\gamma ^*_i|\). Hence, if we know more about the shape of the error distribution, we can be guaranteed to detect bugs of smaller magnitudes. We will make the following assumption on the design matrix:

Assumption 4

There exists a \(p \times p\) positive definite matrix \(\varSigma\), with bounded minimum and maximum eigenvalues, such that for all \(X^{(k)}\) appearing in the while loop of Algorithm 1, we have

$$\begin{aligned} \left\| \frac{X^{(k)} \varSigma ^{-1} X^{(k) \top } }{p} - I\right\| _{\max }&\le c\max \left\{ \sqrt{\frac{\log l^{(k)}}{p}},\frac{\log l^{(k)}}{p} \right\} , \nonumber \\ \left\Vert \frac{X^{(k) \top }X^{(k)}}{l^{(k)}} - \varSigma \right\Vert _2&\le \frac{\varLambda _{\min }(\varSigma )}{2}, \end{aligned}$$
(10)

where \(l^{(k)}\) is the number of rows of the matrix \(X^{(k)}\) and c is a universal constant.

This assumption is a type of concentration result, which we will show holds w.h.p. in some random design settings in the following proposition:

Proposition 2

Suppose the \(x_i\)’s are i.i.d. and satisfy any of the following additional conditions:

(a) the \(x_i\)’s are Gaussian and the spectral norm of the covariance matrix is bounded;

(b) the \(x_i\)’s are sub-Gaussian with mean zero and independent coordinates, and the spectral norm of the covariance matrix is bounded; or

(c) the \(x_i\)’s satisfy the convex concentration property.

Then Assumption 4 holds with probability at least \(1 - O(n^{-1})\).

The \(\varSigma\) matrix can be chosen as the covariance matrix of the rows of X. In fact, Assumption 4 shows that \(P_{X^{(k)}}^\perp\) is approximately a scalar multiple of the identity. We now introduce some additional notation: For \(\nu > 0\), define \(c_\nu\) and \(C_\nu\) such that \(\nu = {\mathbb {P}}[|\epsilon _i| \le c_\nu \sigma ]\) and \(\nu = {\mathbb {P}}[|\epsilon _i| \ge C_\nu \sigma ]\). We write \(G'(\lambda )\) to denote the function of \(\lambda\) in the right-hand expression of inequality (8). Proofs of the theoretical results in this section are provided in Appendix D.

Theorem 3

Assume \(\nu\) is a constant satisfying \(\nu + c_t < \frac{1}{2}\). Suppose Assumption 4, the minimum eigenvalue condition, and the mutual incoherence condition hold. If

$$\begin{aligned} n \ge \max \left\{ \left[ \frac{24}{c_\nu }\right] ^{\frac{1}{c_n}}, \left[ \frac{C \log 2n}{1-c_t}(p^2+ \log ^2 n)\right] ^{\frac{1}{1-2c_n}}\right\} , \end{aligned}$$
(11)

where C is an absolute constant, and

$$\begin{aligned} \min _{i \in T}|\gamma ^*_i|&> \max \Bigg \{G'(2\lambda ^*),4\sqrt{\log (2n)}\sigma , \frac{5}{4}\sqrt{\log (2n)}\frac{c_\nu +5C_\nu }{{\bar{c}}}\sigma \Bigg \}, \nonumber \\ \Vert \gamma ^*\Vert _\infty&\le \frac{\sqrt{C} c_\nu }{16\sqrt{2}}\sqrt{1-c_t} \sqrt{ \log 2n}\, \frac{n^{1/2+c_n}}{t} \sigma , \end{aligned}$$
(12)

for some \(c_n \in (0,\frac{1}{2})\), then Algorithm 1 with inputs \({\bar{c}} < c_\nu\) and \(\lambda _u \ge \lambda ^*\) will return a feasible \(\widehat{\lambda }\) in at most \(\log _2\left( \frac{\lambda _u}{\lambda ^*}\right)\) iterations such that the Lasso estimator \(\widehat{\gamma }\) based on \(\widehat{\lambda }\) satisfies \({\text {supp}}(\widehat{\gamma }) = {\text {supp}}(\gamma ^*)\), with probability at least

$$\begin{aligned} 1 - \frac{3\log _2\left( \frac{\lambda _u}{\lambda ^*}\right) }{n-t} - 2\log _2\left( \frac{\lambda _u}{\lambda ^*}\right) \exp \left( -2\left( \frac{1}{2}-c_t-\nu \right) ^2n\right) . \end{aligned}$$

Theorem 3 guarantees exact support recovery for the output of Algorithm 1 without knowing \(\sigma\). Note that compared to the gamma-min condition (8) with \(\lambda = \lambda ^*\), the required lower bound (12) only differs by a constant factor. In fact, the constant 2 inside \(G'(2\lambda ^*)\) can be replaced by any constant \(c > 1\), but Algorithm 1 will then update \({\hat{\lambda }}^k = {\hat{\lambda }}^{k - 1}/c\) and require \(\log _c \left( \frac{\lambda _u}{\lambda ^*}\right)\) iterations. Further note that larger values of \(c_t\) translate into a larger sample size requirement, since \(n = \varOmega \left( \frac{1}{1-c_t}\right)\) when \(c_n\) is close to 0. A limitation of the theorem is the upper bound on \(\Vert \gamma ^*\Vert _\infty\), which requires t to be small relative to n in a nonlinear fashion. Also, n is required to be \(\varOmega (p^2)\). These two conditions are imposed in our analysis in order to guarantee that \(P_{X_S}^\perp y_S \rightarrow \left( 1-\frac{p}{|S|}\right) (\gamma ^*_S + \epsilon _S)\). We now present a result indicating a practical choice of \(\lambda _u\):

Corollary 1

Define

$$\begin{aligned} \lambda (\sigma ) := \frac{8\max \{1,\sqrt{\frac{\eta n}{Lm}}\}}{1-\alpha '}\sqrt{\log 2(n-t)}\frac{\Vert P_{X,T^c}^\perp \Vert _2}{n} \cdot c\sigma . \end{aligned}$$

Suppose Assumption 4, the minimum eigenvalue condition, and the mutual incoherence condition hold. Also assume conditions (11) and (12) hold when replacing \(\lambda ^*\) by \(\lambda (\sigma )\). Taking the input \(\lambda _u = \frac{2\Vert M_{[n]}^\top P^\perp _{X'} y'\Vert _\infty }{n}\), Algorithm 1 outputs a parameter \(\widehat{\lambda }\) in \(O(\log n)\) iterations which provides exact support recovery, with probability at least

$$\begin{aligned}&1 - \frac{4\left( c'\log _2 n + \max \left\{ 0,\frac{1}{2}\log _2 \frac{\eta n}{mL}\right\} \right) }{n-t} \\&\quad - 2\left( c'\log _2 n + \frac{1}{2}\max \left\{ 0,\log _2 \frac{\eta n}{mL}\right\} \right) e^{-2\left( \frac{1}{2}-c_t-\nu \right) ^2n}. \end{aligned}$$

Note that \(\lambda _u\) can be calculated using the observed data set. Further note that the algorithm is guaranteed to stop after \(O(\log n)\) iterations, meaning it is sufficient to test a relatively small number of candidate parameters in order to achieve exact recovery.

5 Strategy for second pool design

We now turn to the problem of designing a clean data pool. In the preceding sections, we have discussed how a second data pool can aid exact recovery under sub-Gaussian designs. In practice, however, it is often unreasonable to assume that new points can be drawn from an entirely different distribution. Specifically, recall the movie rating example discussed in Sect. 1: The expert can only rate movies in the movie pool, say \(\{x_i\}_{i=1}^n\), whereas an arbitrarily designed \(\widetilde{x}\), e.g., \(\widetilde{x}= x_1 / 2\), is unlikely to correspond to an existing movie. Thus, we will focus on devising a debugging strategy where the debugger is allowed to choose points for the second pool which have the same covariates as points in the first pool.

In particular, we consider this problem in the “worst” case: suppose a bug generator can generate any \(\gamma ^*\in \Gamma := \{\gamma \in \mathbb {R}^n : |{\text {supp}}(\gamma )| \le t\}\) and add it to the correct labels \(X\beta ^*\). We will also suppose the bug generator knows the debugger’s strategy. The debugger attempts to add a second data pool which will ensure that all bugs are detected regardless of the choice of \(\gamma ^*\). Our theory is limited to the noiseless case, where \(y = X\beta ^*+ \gamma ^*\) and \(\widetilde{y}= \widetilde{X}\beta ^*\); the noisy case is studied empirically in Sect. 6.3.3.

5.1 Preliminary analysis

We denote the debugger’s choice by \(\widetilde{x}_i = X^\top e_{\nu (i)}\), for \(i \in [m]\), where \(e_{\nu (i)} \in \mathbb {R}^{n}\) is a canonical vector and \(\nu : [m] \rightarrow [n]\) is injective. In matrix form, we write \(\widetilde{X}= X_{D}\), where \(D \subseteq [n]\) represents the indices selected by the debugger. Assume \(m < p\), so the debugger cannot simply use the clean pool to obtain a good estimate of \(\beta\). In the noiseless case, we can write the debugging algorithm as follows:

$$\begin{aligned} \begin{aligned} \min _{\beta \in \mathbb {R}^p,\gamma \in \mathbb {R}^n}&\Vert \gamma \Vert _1 \\ \text{ subject } \text{ to }&y = X\beta + \gamma , \ \widetilde{y}= \widetilde{X}\beta . \end{aligned} \end{aligned}$$
(13)

Similar to Proposition 1, given a \(\gamma\), we can pick \(\beta\) to satisfy the constraints, specifically \(\beta = \left( X^\top X + \widetilde{X}^\top \widetilde{X}\right) ^{-1} \left( X^\top (y-\gamma )+ \widetilde{X}^\top \widetilde{y}\right)\). Eliminating \(\beta\), we obtain the optimization problem

$$\begin{aligned} \begin{aligned} \min _{\gamma \in \mathbb {R}^n}\&\Vert \gamma \Vert _1 \\ \text{ subject } \text{ to }&\begin{bmatrix}y \\ \widetilde{y}\end{bmatrix} = \begin{bmatrix}X \\ \widetilde{X}\end{bmatrix} \left( X^\top X + \widetilde{X}^\top \widetilde{X}\right) ^{-1} \left( X^\top (y-\gamma )+ \widetilde{X}^\top \widetilde{y}\right) + \begin{bmatrix}\gamma \\ \mathbf {0} \end{bmatrix}. \end{aligned} \end{aligned}$$
(14)
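After the standard split \(\gamma = \gamma ^+ - \gamma ^-\), problem (13) is a linear program; the sketch below (our variable names, assuming SciPy) solves it directly.

```python
# Sketch: solve the noiseless debugging program (13) as an LP.
import numpy as np
from scipy.optimize import linprog

def noiseless_debug(X, y, X_clean, y_clean):
    """min ||gamma||_1  s.t.  y = X beta + gamma,  y_clean = X_clean beta."""
    n, p = X.shape
    m = X_clean.shape[0]
    # variables: [beta (p), gamma_plus (n), gamma_minus (n)]
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.block([[X, np.eye(n), -np.eye(n)],
                     [X_clean, np.zeros((m, 2 * n))]])
    b_eq = np.concatenate([y, y_clean])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[p:p + n] - res.x[p + n:]      # gamma_hat = gamma_plus - gamma_minus
```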

Before presenting our results for support recovery, we introduce some definitions. Define the cone set \({\mathbb {C}}(K)\) for some subset \(K \subseteq [n]\) and \(|K| = t\):

$$\begin{aligned} {\mathbb {C}}(K) := \left\{ \varDelta \in \mathbb {R}^n: \Vert \varDelta _{K^c}\Vert _1 \le \Vert \varDelta _K\Vert _1\right\} . \end{aligned}$$
(15)

Further let \({\mathbb {C}}^A = \cup _{K \subseteq [n], |K| = t} {\mathbb {C}}(K)\), and define

$$\begin{aligned} \bar{P}(D) = \begin{bmatrix}I - X\left( X^\top X + X_D^\top X_D \right) ^{-1}X^\top \\ X_D\left( X^\top X + X_D^\top X_D \right) ^{-1}X^\top \end{bmatrix}. \end{aligned}$$

Theorem 4

Suppose

$$\begin{aligned} Null(\bar{P}(D)) \cap {\mathbb {C}}^A = \{\mathbf {0}\}. \end{aligned}$$
(16)

Then a debugger who queries the points indexed by D cannot be beaten by any bug generator who introduces at most t bugs.

Theorem 4 shows that Eq. (16) is a sufficient condition for support recovery even against an omnipotent bug generator who knows the subset D. The debugger’s goal is therefore to find a subset D which makes Eq. (16) true. Whether such a D exists and how to find it will be discussed in Sect. 5.2.

Remark 1

When \(m = n\), we can verify that \(Null(\bar{P}(D)) = \{\mathbf {0}\}\), which implies that Eq. (16) always holds. Indeed, in this case, we can simply take \(\widetilde{X}= X\) and solve for \(\beta ^*\) explicitly to recover \(\gamma ^*\).

Remark 2

As stated in Theorem 4, Eq. (16) is a sufficient condition for support recovery. In fact, it is an if-and-only-if condition for signed support recovery: When Eq. (16) holds, \({{\,\mathrm{sign}\,}}(\widehat{\gamma }) = {{\,\mathrm{sign}\,}}(\gamma ^*)\); and when it does not hold, the bug generator can find a \(\gamma ^*\) with \(|{\text {supp}}(\gamma ^*)| \le t\) such that \({{\,\mathrm{sign}\,}}(\widehat{\gamma }) \ne {{\,\mathrm{sign}\,}}(\gamma ^*)\).

Remark 3

We can also write \(Null(\bar{P}(D))\) as

$$\begin{aligned} \{u \in \mathbb {R}^n \mid \exists v \in \mathbb {R}^p, \text{ s.t. } u = Xv, X_Dv = 0\}. \end{aligned}$$

Let \(\widehat{\beta }= \beta ^*+ v\) for some vector \(v \in \mathbb {R}^p\). From the constraint-based algorithm, we obtain

$$\begin{aligned} y_T&= X_T(\beta ^*+ v) + \widehat{\gamma }_T, \\ y_{T^c}&= X_{T^c} (\beta ^*+ v) + \widehat{\gamma }_{T^c},\ y_D = X_D(\beta ^*+ v), \end{aligned}$$

which implies that \(\widehat{\gamma }_T = \gamma ^*_T - X_Tv\) and \(\widehat{\gamma }_{T^c} = -X_{T^c} v ,\ X_D v = 0\). Let \(u = Xv\). Then we obtain \(\widehat{\gamma }= \gamma ^*- u\). As can be seen, Eq. (16) requires that \(u = \mathbf {0}\), which essentially implies \(\widehat{\gamma }= \gamma ^*\), and thus \({\text {supp}}(\widehat{\gamma }) = {\text {supp}}(\gamma ^*)\).

5.2 Optimal debugger via MILP

The above analysis is also useful in practice for providing a method for designing \(\widetilde{X}\). Consider the following optimization problem:

$$\begin{aligned}&\max _{K \subseteq [n], |K| \le t, u \in \mathbb {R}^n, v \in \mathbb {R}^p} \Vert u_K\Vert _1 - \Vert u_{K^c}\Vert _1, \end{aligned}$$
(17a)
$$\begin{aligned}&\text{ subject } \text{ to } u = Xv, X_D v = 0, \Vert u\Vert _\infty \le 1. \end{aligned}$$
(17b)

By Theorem 4 and Remark 3, we immediately conclude that if the problem (17) has the unique solution \((u,v) = (\mathbf {0}, \mathbf {0})\), then a debugger who queries the points indexed by D cannot be beaten by a bug generator who introduces at most t bugs.
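For a fixed query set D, the corresponding check can be carried out with an off-the-shelf MILP solver by applying the big-M linearization that leads to (18) (here \(M = 1\) suffices because \(\Vert u\Vert _\infty \le 1\), and the auxiliary vector a is eliminated by equating \(u^+ + u^-\) with \(a^+ + a^-\)). The sketch below uses SciPy’s milp with our own variable layout; the zero-optimum test suggested in the comments is a numerical stand-in for the uniqueness condition, not the paper’s procedure.

```python
# Sketch: solve the adversary's problem (17) for a fixed query set D as a MILP.
# A strictly positive optimum (or a nonzero maximizing u) signals that D does
# not satisfy the sufficient condition of Theorem 4.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def adversary_problem(X, D, t):
    n, p = X.shape
    sizes = [p, n, n, n, n, n, n, n]        # blocks: v, u, u+, u-, a+, a-, z, w
    off = np.cumsum([0] + sizes)
    N = off[-1]
    blk = lambda i: slice(off[i], off[i + 1])
    I = np.eye(n)

    def rows(*cols):                        # assemble one block-row of width N
        R = np.zeros((cols[0][1].shape[0], N))
        for i, B in cols:
            R[:, blk(i)] = B
        return R

    A_eq = np.vstack([rows((0, -X), (1, I)),                   # u = X v
                      rows((0, X[D])),                         # X_D v = 0
                      rows((1, I), (2, -I), (3, I)),           # u = u+ - u-
                      rows((2, I), (3, I), (4, -I), (5, -I))]) # u+ + u- = a+ + a-
    A_ub = np.vstack([rows((2, I), (6, -I)),                   # u+ <= z
                      rows((3, I), (6, I)),                    # u- <= 1 - z
                      rows((4, I), (7, -I)),                   # a+ <= w
                      rows((5, I), (7, I)),                    # a- <= 1 - w
                      rows((7, np.ones((1, n))))])             # sum(w) <= t
    b_ub = np.concatenate([np.zeros(n), np.ones(n), np.zeros(n), np.ones(n), [t]])

    c = np.zeros(N); c[blk(4)] = -1.0; c[blk(5)] = 1.0   # minimize -(sum a+ - sum a-)
    lb, ub = np.full(N, -np.inf), np.full(N, np.inf)
    lb[off[1]:], ub[off[1]:] = 0.0, 1.0                  # u+, u-, a+, a-, z, w in [0,1]
    lb[blk(1)] = -1.0                                    # u in [-1, 1]; v unbounded
    integrality = np.zeros(N)
    integrality[blk(6)] = integrality[blk(7)] = 1        # z, w binary

    cons = [LinearConstraint(A_eq, np.zeros(len(A_eq)), np.zeros(len(A_eq))),
            LinearConstraint(A_ub, np.full(len(A_ub), -np.inf), b_ub)]
    res = milp(c, constraints=cons, integrality=integrality, bounds=Bounds(lb, ub))
    # Heuristic check: D looks safe if the optimum is (numerically) zero and the
    # returned u is (numerically) zero.
    return -res.fun, res.x[blk(1)]
```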

Based on this argument, we can construct a bilevel optimization problem for the debugger to solve by further minimizing the objective (17a) with respect to \(D \subseteq [n]\) such that \(|D| \le m\). The optimization problem can then be transformed into a minimax MILP:

$$\begin{aligned} \begin{aligned} \min _{\xi \in \{0,1\}^n}&\max _{\begin{array}{c} a,a^+,a^-\in \mathbb {R}^n, \\ u, u^+, u^- \in \mathbb {R}^n, v \in \mathbb {R}^p,\\ z,w \in \{0,1\}^n \end{array}} \sum _{j=1}^n a_j^+ - a_j^-, \\ \text{ subject } \text{ to }&\Big \{ u = Xv, u = u^+ - u^-, u^+, u^- \ge 0, \\&\quad a = u^+ + u^-, u^+ \le z, \ u^- \le (\mathbb {1}_n-z), \\&\quad \sum _{i=1}^n w_i \le t, a^+ \le M w, \ a^- \le M(\mathbb {1}_n-w), \\&\quad a = a^+ + a^-, a^+ \ge 0, a^- \ge 0, \\&\quad \sum _{i=1}^n \xi _i \le m, u \le (\mathbb {1}_n - \xi ), u \ge - (\mathbb {1}_n - \xi ).\Big \} \end{aligned} \end{aligned}$$
(18)

Theorem 5

(MILP for debugging) If the optimization problem (18) has the unique solution \((u,v) = (\mathbf {0}, \mathbf {0})\), then the debugger can add m points indexed by \(D = {\text {supp}}(\xi )\) to achieve support recovery.

Remark 4

For more information on efficient algorithms for optimizing minimax MILPs, we refer the reader to the references Tang et al. (2016), Xu and Wang (2014), Zeng and An (2014).

6 Experiments

In this section, we empirically validate our Lasso-based debugging method for support recovery. The section is organized as follows:

  • Section 6.1, corresponding to Sect. 3, contains a number of experiments which investigate the performance of our proposed debugging formulation.

  • Section 6.2, corresponding to Sect. 4, studies the proposed tuning parameter selection procedure.

  • Section 6.3 studies the Lasso-based debugging method with a clean data pool, including the proposed MILP algorithm from Sect. 5.

We also compare our proposed method to alternative methods motivated by existing literature.

We begin with an outline of the experimental settings used in most of our experiments:

  1. [S1] Generate the feature design matrix \(X \in \mathbb {R}^{n \times p}\) by sampling each row i.i.d. from \({\mathcal {N}}(\mathbf {0}_p, I_{p\times p})\).

  2. [S2] Generate \(\beta ^*\in \mathbb {R}^p\), where each entry \(\beta ^*_i\) is drawn i.i.d. from \(Unif(-1,1)\).

  3. [S3] Generate \(\epsilon \in \mathbb {R}^n\), where each entry \(\epsilon _i\) is drawn i.i.d. from \({\mathcal {N}}(0,\sigma ^2)\).

  4. [S4] Generate the bug vector \(\gamma ^*\in \mathbb {R}^n\), where we draw \(\gamma ^*_i = (10\sqrt{\log (2n)}\sigma + Unif(0,10))\cdot Bernoulli(\pm 1, 0.5)\) for \(i \in [t]\) and take \(\gamma ^*_i = 0\) for the remaining positions.

  5. [S5] Generate the labels by \(y = X\beta ^*+ \epsilon + \gamma ^*\).

These five steps produce a synthetic dataset (Xy); we will specify the particular parameters \((n,p,t, \sigma )\) in each task. If we use a real dataset, the first step changes to [S1’]:

  1. [S1’] Given the whole data pool \(X_{real}\), uniformly sample n data points from it to construct X.

In the plot legends, we will refer to our Lasso-based debugging method as “debugging.” We may also invoke a postprocessing step on top of debugging, called “debugging + postprocess,” which first runs the Lasso optimization algorithm to obtain \(\widehat{\gamma }\) and an estimated support set \({\hat{T}}\), then removes the points \((X_{{\hat{T}},\cdot }, y_{{\hat{T}}})\) and runs ordinary least squares on the remaining points to obtain \(\widehat{\beta }\).
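The generation steps [S1–S5] and the postprocessing step translate directly into code; below is a minimal sketch (our function names and seed handling), where gamma_hat is assumed to come from any solver of (3)/(5).

```python
# Sketch of the synthetic pipeline [S1-S5] and the "debugging + postprocess" refit.
import numpy as np

def make_first_pool(n, p, t, sigma, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))                                  # [S1]
    beta_star = rng.uniform(-1.0, 1.0, size=p)                       # [S2]
    eps = rng.normal(0.0, sigma, size=n)                             # [S3]
    gamma_star = np.zeros(n)                                         # [S4]
    signs = rng.choice([-1.0, 1.0], size=t)
    gamma_star[:t] = signs * (10 * np.sqrt(np.log(2 * n)) * sigma
                              + rng.uniform(0.0, 10.0, size=t))
    y = X @ beta_star + eps + gamma_star                             # [S5]
    return X, y, beta_star, gamma_star

def postprocess(X, y, gamma_hat):
    """Refit OLS on the points not flagged as bugs ("debugging + postprocess")."""
    keep = np.flatnonzero(np.abs(gamma_hat) < 1e-10)
    beta_hat, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta_hat
```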

6.1 Support recovery

In this section, we design two experiments. The first experiment investigates the influence of the fraction of bugs \(c_t:=\frac{t}{n}\) on the three assumptions imposed in our theory and the resulting recovery rates. We will vary the design of X using different datasets. The second experiment compares debugging with four alternative regression methods, using the precision-recall metric. Note that we will take the tuning parameter \(\lambda = 2\frac{\sqrt{\log 2(n-t)}}{n}\) for these experiments, since the other outlier detection methods we use for comparison do not propose a way to perform parameter tuning. We will explore the performance of the proposed algorithm for parameter selection in the next subsection.

6.1.1 Number of bugs versus different measurements

Our first experiment involves four different datasets with different values of n and \(c_t\). We track the three assumptions (Assumptions 1–3) and the subset/exact recovery rates, which measure the fraction of experiments resulting in subset/exact recovery. The first dataset is generated using the synthetic mechanism described at the beginning of Sect. 6, with \(p=15\). The other three datasets are chosen from the UCI Machine Learning Repository: Combined Cycle Power Plant, temperature forecast, and YearPredictionMSD. They are all associated with regression tasks, with varying feature dimensions (4, 21, and 90, respectively). In the temperature forecast dataset, we remove the station and date attributes, since they are discrete. For each of the UCI datasets, after randomly picking n data points from the entire data pool, we normalize the subsampled dataset according to \(X_{\cdot ,j} = \frac{X_{\cdot ,j} - \frac{1}{n}\sum _{i \in [n]} X_{i,j}}{{\text {std}}[X_{\cdot ,j} ]}\), where std represents the standard deviation.

Fig. 1 Five Measurements on Four Datasets. Three values of n are used: \(5p\), \(20p\), and \(100p\). The noise level \(\sigma\) is set to 0.1. The tuning parameter is set to \(\lambda = 2\frac{\sqrt{\log 2(n-t)}}{n}\). Each dot is an average over 20 random trials.

The results are displayed in Fig. 1. For the minimum eigenvalue assumption, a key observation from all datasets is that the minimum eigenvalue becomes larger (improves) as n increases, and becomes smaller as \(c_t\) increases. For the mutual incoherence assumption, the synthetic dataset satisfies the condition with fewer than 15% outliers. The Combined Cycle Power Plant dataset has mutual incoherence close to 1 when \(c_t\) is approximately 20%–25%, and the mutual incoherence parameter of the YearPredictionMSD dataset approaches 1 when \(c_t\) is approximately 5%. Therefore, we see that the validity of the assumption depends heavily on the design of X. For the gamma-min condition, as \(c_t\) increases, the outliers need to be more pronounced (larger \(\min _i|\gamma ^*_i|\)) to be detected. Finally, with larger n and smaller \(c_t\), the subset/exact recovery rates improve.

6.1.2 Effectiveness for recovery

The second experiment compares our debugging method to other methods proposed in the robust statistics literature. We compare our method with the Fast LTS (Rousseeuw and Van Driessen 2006), E-lasso (Nguyen and Tran 2013), Simplified \(\varTheta\)-IPOD (She and Owen 2011), and Least Squares methods. E-lasso is similar to our formulation, except it includes an additional penalty on \(\beta\). The Simplified \(\varTheta\)-IPOD method iteratively uses hard thresholding to eliminate the influence of outliers. For the experimental setup, we generate synthetic data with \(n = 2000, t = 200, p = 15\), and \(\sigma = 0.1\), but replace step [S4] by one of the following mechanisms for generating \(\gamma ^*\):

  1. We generate \(\gamma ^*_i,\ i \in T\), by \(Bernoulli(\pm 1,0.5) \cdot (10\sqrt{\log (2n)}\sigma + Unif(0,10))\).

  2. We generate \(\beta '\) elementwise from \(Unif(-10,10)\) and take \(\gamma ^*_i = x_i^\top (\beta '-\beta ^*),\ i \in T\).

The first adversary is random, whereas the second adversary aims to attack the data by inducing the learner to fit another hyperplane. The precision/recall for Fast LTS and Least Squares are calculated by running the method once and applying various thresholds to clip \(\widehat{\gamma }\). For the other three methods, we apply different tuning parameters, compute precision/recall for each result, and finally combine them to plot a macro precision-recall curve.
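For the threshold-based curves, precision and recall can be obtained by treating \(|\widehat{\gamma }_i|\) as a score for “point i is a bug”; a minimal sketch (assuming scikit-learn) follows.

```python
# Sketch: precision-recall curve from a single estimate gamma_hat by sweeping a
# threshold on |gamma_hat_i| (the procedure used for Fast LTS and Least Squares).
import numpy as np
from sklearn.metrics import precision_recall_curve

def bug_pr_curve(gamma_star, gamma_hat):
    is_bug = (gamma_star != 0).astype(int)           # ground-truth bug indicators
    scores = np.abs(gamma_hat)                       # larger score = more suspicious
    precision, recall, thresholds = precision_recall_curve(is_bug, scores)
    return precision, recall, thresholds
```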

In the left panel of Fig. 2, Least Squares and Fast LTS reach perfect AUC, while the other three methods have slightly lower scores. In the right panel of Fig. 2, we see that debugging, E-lasso, and Fast LTS perform comparably well, and slightly better than Simplified \(\varTheta\)-IPOD. Not surprisingly, Least Squares performs somewhat worse, since it is not a robust procedure.

Fig. 2 Precision-Recall Curves over Different Regression Methods. The two plots correspond to the two settings described in the text for generating \(\gamma ^*\). To better view the curves, we only show dots at every c positions, where c is an integer that differs across methods.

6.2 Tuning parameter selection

We now present two experimental designs for tuning parameter selection. The first experiment runs Algorithm 1 for both one- and two-pool cases. We will present the recovery rates for a range of n’s and \(c_t\)’s, showing the effectiveness of our algorithm in a variety of situations. The second experiment compares Algorithm 1 in one- and two-pool cases to cross-validation, which is a popular alternative for parameter tuning. Our results indicate that Algorithm 1 outperforms cross-validation in terms of support recovery performance.

We begin by describing the method used to generate the second data pool. Given the first data pool (X, y) and the ground-truth parameters \((\beta ^*, \sigma )\), we describe two pipelines to generate the second pool. The first pipeline re-queries clean labels for m randomly chosen points of the first pool, with steps [T1–T3]:

  1. [T1] Select m points uniformly at random from the first pool to construct \(\widetilde{X}\) for the second pool.

  2. [T2] Generate \(\widetilde{\epsilon }\in \mathbb {R}^m\), where each entry \(\widetilde{\epsilon }_i\) is drawn i.i.d. from \({\mathcal {N}}(0,\sigma ^2/L)\).

  3. [T3] Generate the labels by \(\widetilde{y}= \widetilde{X}\beta ^*+ \widetilde{\epsilon }\).

When the debugger is able to query features of clean points from a distribution \({\mathcal {P}}_X\), we can use a second pipeline, where [T1] is replaced by [T1’]:

  1. [T1’] Independently draw m points from \({\mathcal {P}}_X\) to construct \(\widetilde{X}\).

6.2.1 Verification of Algorithm 1

We use the default procedure for generating the synthetic dataset, with parameters \(p = 15\), \(\sigma =0.1\), and \(t = c_t n\), where \(c_t\) ranges from 0.05 to 0.4 in increments of 0.05. In all cases, we input \({\bar{c}} = 0.2\) and \(\lambda _u = \frac{2\Vert P_{X}^\perp y \Vert _\infty }{n}\) in Algorithm 1.

Fig. 3 Exact Recovery Rate over 20 Trials. The recovery rate is shown for different settings, varying the fraction of outliers \(c_t\) and n. The left subfigure is for the one-pool case and the right subfigure is for the two-pool case. We set \(m = 100\) and \(L = 5\) for the second pool.

Figure 3 displays the results for \(n \in \{1,2,3,4,5,10,20,30\}\cdot 10^3\). First, we see that Algorithm 1 achieves exact support recovery in all 20 trials in the yellow area. Second, the exact recovery rate increases with increasing n and decreasing \(c_t\), showing that the algorithm is particularly useful for large-scale data sets. This trend can also be seen from the requirement on n imposed in Theorem 3. In particular, we see that the contour curve for the exact recovery rate matches the curve of \(\left( 1-c_t\right) ^{-\frac{1}{1-2c_n}}\) for some constant \(c_n \in (0,\frac{1}{2})\). However, a downside of Algorithm 1 is that it does not fully take advantage of the second pool in the two-pool case, as the left panel and the right panel display similar results.

6.2.2 Effectiveness of tuning parameter selection

We now compare our method for tuning parameter selection to cross-validation. We also use the postprocessing step described at the beginning of the section. Four measurements are presented, including two recovery rates, the \(\ell _2\)-error of \(\widehat{\beta }\), and the runtime. In both the one- and two-pool cases, we use our default methods for generating synthetic data, and we set \({\bar{c}} = 0.2\) for all the experiments.

The cross-validation method for the one-pool case splits the dataset into training and testing sets in an 8:2 ratio, then selects the \(\lambda\) with the smallest test error, \(\Vert X_{test} \widehat{\beta }- y_{test}\Vert _2\). The procedure for the two-pool case is to run the Lasso-based debugging method with a list of candidate \(\lambda\)’s and test it on the second pool. Finally, we select the \(\lambda\) value with the smallest test error, \(\Vert \widetilde{X}\widehat{\beta }- \widetilde{y}\Vert _2\). We use 15 candidate values for \(\lambda\), spaced evenly on a log scale between \(10^{-6}\) and \(\lambda _u = \frac{2\Vert P_{X}^\perp y \Vert _\infty }{n}\).
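Concretely, the two-pool validation described above amounts to a grid search; in the sketch below (our names), solver(lam) is assumed to return \((\widehat{\beta }, \widehat{\gamma })\) from the joint objective (3).

```python
# Sketch: select lambda by validating on the clean second pool.
import numpy as np

def validate_on_clean_pool(solver, X_clean, y_clean, lam_max, n_grid=15, lam_min=1e-6):
    grid = np.logspace(np.log10(lam_min), np.log10(lam_max), n_grid)
    errors = []
    for lam in grid:
        beta_hat, _ = solver(lam)
        errors.append(np.linalg.norm(X_clean @ beta_hat - y_clean))  # test error
    best = int(np.argmin(errors))
    return grid[best], errors[best]
```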

Figure 4 compares the results in the one-pool case. We note that cross-validation does not perform well on any of the measurements except \(\Vert \widehat{\beta }- \beta ^*\Vert _2\). Specifically, it does not work at all for subset support recovery, since cross-validation tends to choose very small \(\lambda\) values. For the \(\ell _2\)-error, we see that for small values of \(c_t\), our algorithm can select a suitable choice of \(\lambda\), so that after removing outliers, we can fit the remaining points very well. This is why the debugging + postprocessing method gives the lowest error. As \(c_t\) increases, our debugging method shows poorer performance in terms of support recovery, resulting in larger \(\ell _2\)-error for \(\widehat{\beta }\). Although cross-validation seems to perform well on this metric, carefully designed adversaries may still destroy its good performance, since its test dataset could be made to contain numerous buggy points.

Fig. 4 Effectiveness of Tuning Parameter Selection (One Pool). Each dot is the average result of 20 random trials. We set \(n = 2000, p = 15\), and \(\sigma =0.1\).

Fig. 5 Effectiveness of Tuning Parameter Selection (Two Pools). Each dot is the average result of 20 random trials. We set \(n = 1000, p = 15, t = 100, L = 5\), and \(\sigma =0.1\).

Figure 5 displays the results for the two-pool experiments, which are qualitatively similar to the results of the one-pool experiments. We emphasize that our method works well for support recovery; furthermore, the methods exhibit comparable performance in terms of the \(\ell _2\)-error. The slightly larger error of our debugging method can be attributed to the bias which arises from using an \(\ell _1\)-norm instead of an \(\ell _0\)-norm.

6.3 Experiments with clean points

We now focus on debugging methods involving a second clean pool. We have three experimental designs: First, we study the influence of m on support recovery. Second, we compare debugging with alternative methods suggested in the literature. Third, we study the performance of our proposed MILP debugger, where we compare it to three other simple strategies. Different strategies for selecting clean points correspond to changing step [T1] in the setup described above.

6.3.1 Number of clean points versus exact recovery

In this subsection, we present two experiments involving the synthetic and YearPredictionMSD datasets, respectively, to see how m affects the exact recovery rate. Recall that the pipeline for generating the first pool is described at the beginning of Sect. 6. For the second pool, we use steps [T1’, T2, T3] for the synthetic dataset, where we assume \({\mathcal {P}}_X\) is standard Gaussian. We take steps [T1–T3] for YearPredictionMSD, re-querying sampled points from the first pool.

Fig. 6 Minimal Gamma versus Exact Recovery Rate on Synthetic Data. We run 50 trials for each dot and report the average.

Recall that the YearPredictionMSD dataset is designed to predict the release year of a song from audio features. The dataset consists of 515,345 songs, each with 90 audio features. Therefore, for both experiments, we set \(n = 500, t = 50, p = 90, \sigma = 0.1\), and \(L = 10\), and take \(\lambda = 2.5\frac{\sqrt{\log (n-t)}}{n}\).

From Fig. 6, we see that the phenomena are similar for the two different design matrices. In particular, increasing the number of clean points helps with exact recovery. For instance, in the left subfigure, for \(m = 0\), when \(\min _i |\gamma ^*_i| > 2.9\), the exact recovery rate goes to 1. For \(m = 100\), the exact recovery rate goes to 1 when \(\min _i |\gamma ^*_i| > 2.4\). Also, the curve for larger m is steeper. Thus, adding a second pool helps relax the gamma-min condition.

6.3.2 Comparisons to methods with clean points

In this experiment, we compare the debugging method for two pools with other methods suggested by the machine learning literature. We generate synthetic data using the default first-pool setup with \(n = 1000, p = 15, t = 100\), and \(\sigma =0.1\), and we run [T1–T3] to generate the second pool using different values of m. For our proposed debugging method, we use Algorithm 1 to select the tuning parameter. We compare the following methods: (1) debugging + postprocessing, (2) least squares, (3) simplified noisy neural network, and (4) semi-supervised eigvec. The least squares solution is applied using \(\left\{ \begin{pmatrix} X \\ \widetilde{X}\end{pmatrix}, \begin{pmatrix} y \\ \widetilde{y}\end{pmatrix}\right\}\).

The simplified noisy neural network method borrows an idea from Veit et al. (2017), which is designed for image classification on datasets with noisy and clean points. That work introduces two networks and combines them: the “Label Cleaning Network,” used to correct the labels, and the “Image Classifier,” which classifies images using CNN features as inputs and corrected labels as outputs. Each is associated with a loss, and the goal is to minimize the sum of the losses. Let \(w\in \mathbb {R}, \beta _1 \in \mathbb {R}^p\), and \(\beta _2 \in \mathbb {R}^p\) be the variables to be optimized. For our linear regression setting, we design the “Label Cleaning Network” by defining \({\hat{c}}_i = y_i w + x_i^\top \beta _1\) as the corrected labels for both noisy and clean datasets. Then we define the loss \({\mathcal {L}}_{clean} = \sum _{i \in \text {clean set}} |{\tilde{y}}_i - {\hat{c}}_i| = \sum _{i \in \text {clean set}} |{\tilde{y}}_i - y_i w - x_i^\top \beta _1|\). The “Image Classifier” is modified to the regression setting using predictions \(x_i^\top \beta _2\) and the squared loss. Therefore, the classification loss can be formalized as \({\mathcal {L}}_{classify} = \sum _{i \in \text {clean set}} (x_i^\top \beta _2 - {\tilde{y}}_i)^2 + \sum _{i \in \text {noisy set}} (x_i^\top \beta _2 - {\hat{c}}_i)^2\). Together, the optimization problem becomes

$$\begin{aligned} \min _{w \in \mathbb {R},\ \beta _1, \beta _2 \in \mathbb {R}^p} \sum _{i \in \text {clean set}} \left\{ (x_i^\top \beta _2 - {\tilde{y}}_i)^2 + |{\tilde{y}}_i - w y_i - x_i^\top \beta _1|\right\} +\sum _{i \in \text {noisy set}} (x_i^\top \beta _2 - w y_i - x_i^\top \beta _1)^2. \end{aligned}$$

We use gradient descent for the optimization, initialized with \(w = 0\) and \(\beta _1 = \beta _2 = \beta _{ls}\). The minimizer \(\widehat{\beta }_2\) is used for further predictions, and we then calculate \(\widehat{\gamma }= y - X\widehat{\beta }_2\). For gradient descent, we try multiple step sizes and choose the one with the smallest squared loss on the clean pool.
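For concreteness, here is a sketch of this adaptation in Python. Rather than the hand-tuned gradient descent described above, it calls SciPy’s L-BFGS-B on the combined loss (a substitution on our part); the corrected label is taken as \({\hat{c}}_i = y_i w + x_i^\top \beta _1\), and all function names are illustrative.

```python
# Sketch of the label-cleaning baseline adapted to linear regression:
# jointly fit (w, beta_1, beta_2) by minimizing the combined loss above.
import numpy as np
from scipy.optimize import minimize

def noisy_nn_fit(X, y, clean_idx, y_clean):
    """clean_idx: first-pool indices whose clean labels y_clean were queried."""
    n, p = X.shape
    noisy_idx = np.setdiff1d(np.arange(n), clean_idx)
    Xc, yc = X[clean_idx], y[clean_idx]
    Xn, yn = X[noisy_idx], y[noisy_idx]

    def loss(theta):
        w, b1, b2 = theta[0], theta[1:p + 1], theta[p + 1:]
        c_clean = w * yc + Xc @ b1               # corrected labels, clean subset
        c_noisy = w * yn + Xn @ b1               # corrected labels, noisy subset
        return (np.sum((Xc @ b2 - y_clean) ** 2)         # fit clean labels
                + np.sum(np.abs(y_clean - c_clean))      # label-cleaning loss
                + np.sum((Xn @ b2 - c_noisy) ** 2))      # fit corrected labels

    beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)      # initialize at least squares
    theta0 = np.concatenate([[0.0], beta_ls, beta_ls])
    res = minimize(loss, theta0, method="L-BFGS-B")
    beta2_hat = res.x[p + 1:]
    return beta2_hat, y - X @ beta2_hat          # beta estimate and residuals
```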

The method “semi-supervised eigvec” is from Fergus et al. (2009), and is designed for the semi-supervised classification problem. It also contains an experimental setting that involves noisy and clean data. To further apply the ideas in our linear regression setting, we make the following modifications: Define the loss function as

$$\begin{aligned}J(f) = f^\top L f + \left( f-\begin{pmatrix} y \\ {\tilde{y}} \end{pmatrix}\right) ^\top \varLambda \left( f-\begin{pmatrix} y \\ {\tilde{y}} \end{pmatrix}\right) ,\end{aligned}$$

where \(L = D-W (\varepsilon )\) is the graph Laplacian matrix and \(\varLambda\) is a diagonal matrix whose diagonal elements are \(\varLambda _{ii} = \lambda\) for clean points and \(\varLambda _{ii} = \frac{\lambda }{c}\) for noisy points. In the classification setting, \(f\in \mathbb {R}^{n+m}\) is to be optimized. The idea is to constrain the elements of f by injecting smoothness/similarity using the Laplacian matrix L. Since we assume the linear regression model, we can further plug in \(f = \begin{pmatrix} X \\ {\tilde{X}}\end{pmatrix}\beta\). Our goal is then to estimate \(\beta\) by minimizing \(J(\beta )\). As suggested in the original paper, we use the range of values \(\varepsilon \in [0,1,1,5], c \in [1,10,50]\), and \(\lambda \in [1,10,100]\). We will evaluate all 36 possible combinations and pick the one with the smallest squared loss on the clean pool.
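Since \(f = \begin{pmatrix} X \\ \widetilde{X}\end{pmatrix}\beta\) makes J quadratic in \(\beta\), the minimizer is available in closed form; the sketch below (our names) takes a precomputed affinity matrix W over the stacked points, so the construction of \(W(\varepsilon )\) from Fergus et al. (2009) is left to the caller.

```python
# Sketch of the graph-Laplacian baseline in closed form: beta_hat minimizes
# J(beta) = (Z beta)^T L (Z beta) + (Z beta - y_all)^T Lambda (Z beta - y_all).
import numpy as np

def laplacian_baseline(X, y, X_clean, y_clean, W, lam, c):
    Z = np.vstack([X, X_clean])                  # stacked features, f = Z beta
    y_all = np.concatenate([y, y_clean])
    L = np.diag(W.sum(axis=1)) - W               # graph Laplacian D - W(eps)
    diag_lambda = np.full(len(y_all), lam / c)   # weight lambda/c on noisy points...
    diag_lambda[len(y):] = lam                   # ...and lambda on clean points
    A = Z.T @ (L + np.diag(diag_lambda)) @ Z     # normal equations of J(beta)
    b = Z.T @ (diag_lambda * y_all)
    return np.linalg.solve(A, b)
```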

Fig. 7 Comparison to methods involving clean points. Each dot is the average result of 20 random trials. We use the synthetic data setting, with \(n = 500, p = 15, \sigma =0.1, t = 0.1n\), and \(\min _i |\gamma ^*_i| = 10 \sqrt{\log 2n}\, \sigma\). The clean data pool is randomly chosen from the first pool without replacement; we query the labels of these chosen points

The results are shown in Fig. 7. We observe that only the debugging method is effective for support recovery, which is expected, since our method is designed specifically with this goal in mind. The method adapted from Veit et al. (2017) works best in terms of the \(\ell _2\)-error of \(\beta\), especially when m is large. The semi-supervised method, like least squares, does not perform well, possibly because it does not replace or remove the influence of the noisy dataset.

6.3.3 Effectiveness on second pool design

We now provide experiments to investigate the design of the clean pool, corresponding to Sect. 5. We use the Concrete Slump dataset (see footnote 4), where \(p = 7\). We limit our study to small datasets, since the runtime of the MILP optimizer is quite long. We report the performance of the MILP debugging method in both noiseless and noisy settings, comparing the MILP debugger to a random debugger and a natural debugging heuristic: adding high-leverage points to the second pool. In other words, D.milp selects the m clean points to query by solving the MILP (18); D.leverage selects the m points with the largest values of \(x_i^\top (X^\top X)^{-1}x_i\); and D.random randomly chooses m points from the first pool without replacement. After choosing the clean pool, the debugger applies the Lasso-based algorithm. In Zhang et al. (2018), all second-pool points are chosen either randomly or artificially, so we may regard D.random as an implementation of the method of Zhang et al. (2018), to be compared with our D.milp.
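As an illustration, the selection rules for D.leverage and D.random amount to the following sketch; the MILP-based selection of D.milp is omitted, since it requires solving the optimization problem (18).

```python
import numpy as np

def select_leverage(X, m):
    """D.leverage: the m points with the largest leverage scores x_i^T (X^T X)^{-1} x_i."""
    scores = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)
    return np.argsort(scores)[-m:]

def select_random(n, m, rng=np.random.default_rng()):
    """D.random: m first-pool indices chosen uniformly without replacement."""
    return rng.choice(n, size=m, replace=False)
```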

In the noiseless setting, we define \(\beta ^*\) to be the least squares solution computed from all data points, and we randomly select n data points as the \(x_i\)’s. For D.milp and D.leverage, since the bug generator knows their strategies (and hence the selected set D), it generates bugs according to the optimization problem (17). Let \(T \subseteq [n]\) be the index set of the t largest \(|u_i|\)’s, for \(i = 1,\dots , n\). The bug generator takes \(\gamma ^*_T = u_T\) if the solution u is nonzero, and otherwise randomly generates a subset T of size t and sets \(\gamma ^*_T = \mathbf {1}\). Thus, \(y_i = x_i^\top \beta ^* + \gamma ^*_i\). For D.onepool, the bug generator follows the above description with \(D = \emptyset\); the orange bars indicate whether the bug generator succeeds in exact recovery in the one-pool case. For D.random, the bug generator generates bugs using the same mechanism as for D.onepool. Note that the above bug-generating methods are the “worst” in the sense of signed support recovery. The debuggers run (14) using their selected \(X_D\). From Fig. 8, there is a clear advantage of D.milp over D.onepool and D.leverage, which demonstrates the improved performance of our MILP algorithm. D.random is sometimes successful even when n and t are small, because the bug generator cannot control the randomness, but it performs worse than D.milp overall.
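The bug-generating mechanism described above can be summarized as follows, where u denotes the (possibly zero) solution of the optimization problem (17); the function name and interface are hypothetical.

```python
import numpy as np

def generate_bugs(u, t, n, rng=np.random.default_rng()):
    """Construct gamma* from the bug generator's solution u of (17)."""
    gamma = np.zeros(n)
    if np.any(u != 0):
        T = np.argsort(np.abs(u))[-t:]             # indices of the t largest |u_i|
        gamma[T] = u[T]                            # gamma*_T = u_T
    else:
        T = rng.choice(n, size=t, replace=False)   # random support of size t
        gamma[T] = 1.0                             # gamma*_T = 1
    return gamma                                   # noiseless labels: y = X beta* + gamma
```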

Fig. 8 Comparison between D.milp and other debugging strategies in noiseless settings. Each setting is an average over 50 random trials

In the noisy setting, we again define \(\beta ^*\) to be the least squares solution computed from the entire data set and randomly select n data points as the \(x_i\)’s. For D.milp and D.leverage, since the bug generator knows their strategies (and hence the selected set D), it generates bugs via the optimization problem (17): it takes \(\gamma ^*_T = u_T\) if the solution u is nonzero, where T indexes the largest t elements of |u|, and otherwise randomly generates a subset T of size t and sets \(\gamma ^*_T = \mathbf {1}\). Thus, \(y_i = x_i^\top \beta ^* + \gamma ^*_i + {\mathcal {N}}(0,0.01)\). Note that setting \(\gamma ^*_T = u_T\) when the solution u is nonzero leads to incorrect signed support recovery, as proved in Appendix E.1; this is related to the claim in Remark 2 above. For D.onepool, the bug generator follows the above description with \(D = \emptyset\); the orange bars indicate whether the bug generator succeeds in exact recovery in the one-pool case. For D.random, since the choice of D is not deterministic, the bug generator does not know D and acts in the same way as in the one-pool case. Note that the above bug-generating methods are the “worst” in the sense of signed support recovery. From Fig. 9, there is a clear advantage of D.milp over D.onepool and D.leverage. Our theory only guarantees the success of D.milp in the noiseless setting, so the experimental results in the noisy setting are indeed encouraging.

Fig. 9 Comparison between the MILP strategy and other debugging strategies in noisy settings. In each setting, we run 20 random simulations

Debugging in practice: The minimax optimization has been executed by enumerating all \(\binom{n}{m}\) possible choices of clean points in the outer loop; for each outer iteration, we then run the inner maximization. For optimal debugging in practice, i.e., when n, t, and m are large, some recent work provides methods for efficiently solving the minimax MILP (Tang et al. 2016). Note that the MILP debugger can easily be combined with other heuristic methods: one can run the MILP, and if there is a nonzero solution, follow it to add clean points; otherwise, one can switch to other methods, such as choosing random points or high-leverage points.
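A brute-force version of this minimax procedure can be sketched as follows; `inner_max` stands for the inner maximization (e.g., solving (17) for a fixed clean pool D) and is a hypothetical placeholder.

```python
from itertools import combinations

def brute_force_debugger(X, m, inner_max):
    """Enumerate all C(n, m) clean-pool choices and keep the one with the
    smallest worst-case value achievable by the bug generator."""
    n = X.shape[0]
    best_D, best_val = None, float('inf')
    for D in combinations(range(n), m):
        val = inner_max(X, D)      # inner maximization for this choice of D
        if val < best_val:
            best_D, best_val = D, val
    return best_D
```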

7 Conclusion

We have developed theoretical results for machine learning debugging via M-estimation and discussed sufficient conditions under which support recovery may be achieved. As shown by our theoretical results and illustrative examples, a clean data pool can assist debugging. We have also designed a tuning parameter algorithm which is guaranteed to obtain exact support recovery when the design matrix satisfies a certain concentration property. Finally, we have analyzed a competitive game between the bug generator and the debugger, and proposed a mixed integer optimization strategy for the debugger. Empirical results show the success of the tuning parameter algorithm and the proposed debugging strategy.

Our work raises many interesting future directions. First, the question of how to optimally choose the weight parameter \(\eta\) remains open. Second, although we have mentioned several efficient algorithms for bilevel mixed integer programming, we have not performed a thorough comparison of these algorithms for our specific problem. Third, although our MILP strategy for second-pool design has been found experimentally to be effective in the noisy setting, we do not have corresponding theoretical guarantees. Fourth, our proposed debugging strategy is a one-shot method, and designing adaptive methods for choosing the second pool constitutes a fascinating research direction. Fifth, the analysis of our tuning parameter algorithm suggests that a geometrically decreasing series might be used as a grid for more general tuning parameter selection methods, e.g., cross-validation; in practice, one may not need to test candidate parameters on a large grid chosen linearly from an interval. Finally, it would be very interesting to extend the ideas in this work to regression or classification settings where the underlying data do not follow a simple linear model.

The supplementary material is organized as follows: Sect. A presents some additional discussion of \(\beta \). Sections B, C, D, and E provide proofs for the problem reformulation and support recovery, tuning parameter selection, and the second-pool selection strategy, respectively; they also include additional discussion and formal statements referenced in the main text.