Article

Outliers in Semi-Parametric Estimation of Treatment Effects

by Gustavo Canavire-Bacarreza 1, Luis Castro Peñarrieta 1,2,* and Darwin Ugarte Ontiveros 1,3
1 Centro de Investigaciones Económicas y Empresariales (CIEE), Universidad Privada Boliviana, La Paz, Bolivia
2 División de Economía, Centro de Investigación y Docencia Económicas, A.C. (CIDE), Aguascalientes CP20313, Mexico
3 Banco Central de Bolivia (BCB), La Paz, Bolivia
* Author to whom correspondence should be addressed.
Econometrics 2021, 9(2), 19; https://doi.org/10.3390/econometrics9020019
Submission received: 17 December 2020 / Revised: 11 March 2021 / Accepted: 20 March 2021 / Published: 30 April 2021

Abstract: Outliers can be particularly hard to detect, creating bias and inconsistency in semi-parametric estimates. In this paper, we use Monte Carlo simulations to demonstrate that semi-parametric methods, such as matching, are biased in the presence of outliers. Both bad and good leverage point outliers are considered. Bias arises in the case of bad leverage points because they completely change the distribution of the metrics used to define counterfactuals; good leverage points, on the other hand, increase the chance of breaking the common support condition and distort the balance of the covariates, which may push practitioners to misspecify the propensity score or the distance measures. We provide some clues to identify and correct for the effects of outliers following a reweighting strategy in the spirit of the Stahel-Donoho (SD) multivariate estimator of scale and location and the S-estimator of multivariate location (Smultiv). An application of this strategy to experimental data is also implemented.
JEL Classification:
C21; C14; C52; C13

1. Introduction

Treatment effect techniques are the workhorse tool for examining the causal effects of interventions, i.e., whether the outcome for an observation is affected by participation in a program or policy (treatment). As Bassi (1983, 1984) and Hausman and Wise (1985) argue, counterfactual estimates are precise when using randomized experiments; yet, when looking at non-randomized experiments, a number of assumptions, such as unconfoundedness, exogeneity, ignorability, or selection on observables, should be considered before estimating the true effect, or getting close to that of a randomized experiment (Imbens (2004)).1
While the above assumptions are usually considered when identifying treatment effects (see King et al. (2017)), one issue that has been overlooked in the existing literature is the existence of outliers (in both the outcome and the covariates).2 A number of problems arise from the existence of outliers. For example, they may bias or modify estimates of priority interest, in our case the treatment effect (see some discussion in Rasmussen (1988); Schwager and Margolin (1982); and Zimmerman (1994)); they may also increase the variance and reduce the power of methods, especially those within the non-parametric family. If non-randomly distributed, they may reduce normality and, in multivariate analyses, violate the assumptions of sphericity and multivariate normality, as noted by Osborne and Overbay (2004).
To the best of our knowledge, the effects of outliers on the estimation of semi-parametric treatment effects have not yet been analyzed in the literature. The only reference is Imbens (2004), who directly associates outlying values in the covariates to a lack of overlap. Imbens (2004) argues that outlier observations will present estimates of the probability of receiving treatment close to 0 or 1, and therefore methods dealing with limited overlap can produce estimates approximately unchanged in bias and precision. As shown in this paper, this intuition is valid only for outliers that are considered good leverage points. Moreover, Imbens (2004) argues that treated observations with outlying values may lead to biased covariate matching estimates, because these observations would be matched to inappropriate controls. Control observations with outlying covariate values, on the other hand, will likely have little effect on the estimates of average treatment effect for the treated, since such observations are unlikely to be used as matches. We provide evidence for this intuition.
Thus, in this paper, we examine the relative performance of semi-parametric estimators of average treatment effects in the presence of outliers. Three types of outliers are considered: bad leverage points, good leverage points, and vertical outliers. The analysis considers outliers located in the treatment group, the control group, and in both groups. We focus on (i) the effect of these outliers in the estimates of the metric, propensity score, and Mahalanobis distance; (ii) the effect of those metrics contaminated by outliers in the matching procedure when finding counterfactuals; and (iii) the effect of these matches on the estimates of the average treatment effect on the treated (TOT).
Using Monte Carlo simulations, we show that the semi-parametric estimators of average treatment effects produce biased estimates in the presence of outliers. Our findings show that, first, the presence of bad leverage points (BLP) yields biased estimators of average treatment effects. The bias emerges because this type of outlier completely changes the distribution of the metrics used to define good counterfactuals, and therefore changes the matches that had initially been undertaken, assigning as matches observations with very different characteristics. This effect is independent of the location of the outlier observation. Second, good leverage points (GLP) in the treatment sample slightly bias estimates of average treatment effects, because they increase the chance of violating the overlap condition. Third, good leverage points in the control sample do not affect the estimates of treatment effects, because they are unlikely to be used as matches. Fourth, these outliers distort the balance-of-the-covariates criterion used to specify the propensity score. Fifth, vertical outliers in the outcome variable greatly bias estimates of average treatment effects. Sixth, good leverage points can be identified visually by looking at the overlap plot. Bad leverage points, however, are masked in the estimation of the metric and are, as a consequence, practically impossible to identify unless a formal outlier identification method is implemented.
To identify outliers, we suggest two robust strategies: one based on the Stahel (1981) and Donoho (1982) estimator of scale and location, proposed in the literature by Verardi et al. (2012), and the other proposed by Verardi and McCathie (2012). What we suggest is thus identifying all types of outliers in the data by these methods and estimating treatment effects again, down-weighting the importance of outliers; this is a one-step reweighted treatment effect estimator. Monte Carlo simulations support the utility of these tools for overcoming the effects of outliers in the semi-parametric estimation of treatment effects.
An application of these estimators to the data of LaLonde (1986) allows us to understand the failure of the matching estimators of Dehejia and Wahba (1999, 2002) to overcome LaLonde’s critique of non-experimental estimators. We show that the large bias, when considering LaLonde’s full sample, of Dehejia and Wahba (1999, 2002), which was criticized by Smith and Todd (2005) can be explained by the presence of outliers in the data. When the effect of these outliers is down-weighted, the matching estimates of Dehejia and Wahba (1999, 2002) approximate the experimental treatment effect of LaLonde’s sample.
This paper is structured as follows: Section 2 briefly reviews the literature. Section 3 defines the balancing hypothesis, the semi-parametric estimators, and the types of outliers considered, as well as the S-estimator and the Stahel-Donoho estimator of location and scatter used to detect outliers. In Section 4, the data generating process (DGP) is characterized. The analysis of the effects of outliers is presented in Section 5. An application to LaLonde’s data is presented in Section 6, and in Section 7, we conclude.

2. A Brief Review of the Literature

Different methodologies for identifying outliers have been proposed in the literature, using statistical reasoning (Hadi et al. (2009)), distances (Angiulli and Pizzuti (2002); Knorr et al. (2000); Orair et al. (2010)), and densities (Breunig et al. (2000); De Vries et al. (2010); Keller et al. (2012)). However, the issue has not been completely resolved and may become especially troublesome: outliers often do not show up under simple visual inspection or univariate analysis, and when several outliers are grouped close together in a region of the sample space far from the bulk of the data, they may mask one another (see Rousseeuw and Van Zomeren (1990)).
To illustrate the outlier problem in a labor market setting such as that employed by Ashenfelter (1978) and Ashenfelter and Card (1985), consider a case in which the data clearly show that highly educated people attend a training program and uneducated individuals do not. Now assume a small number of individuals without schooling are participating in the program and a small number of educated individuals are not, while having similar remaining characteristics. Enrolled individuals with an outstanding level of education may represent good leverage points. The two small groups just mentioned, on the other hand, may constitute bad leverage points in the treatment and control samples, respectively. Regardless of whether these individuals genuinely belong to the sample or are errors from the data encoding process, they may have a large influence on the treatment effect estimates and drive the conclusion about the impact of the training program for the entire sample, as pointed out by Khandker et al. (2009) and Heckman and Vytlacil (2005). The problem considered in this paper is that, as semi-parametric techniques, matching methods rely on a parametric estimation of the metrics (propensity score and Mahalanobis distance) used to define and compare observations with similar characteristics in terms of covariates, while the relationship between the outcome variables and the metric is nonparametric. Therefore, the presence of multivariate outliers in the data set can strongly bias the estimators of the metrics and lead to unreliable treatment effect estimates. According to the findings of Rousseeuw and Van Zomeren (1990), vertical outliers can also bias the nonparametric relationship between the metric and the outcome by distorting the average outcome in either the observed or counterfactual group.
Moreover, these distortions, caused by the presence of multivariate outliers in the data set, can distort the balance of the covariates when specifying the propensity score, as in Dehejia (2005). These issues have practical implications: when choosing the variables to specify the propensity score, it may not be necessary to discard troublesome but relevant variables from a theoretical point of view or generate senseless interactions or nonlinearities. It might be sufficient to discard troublesome observations (outliers). That is, outliers can push practitioners to unnecessarily misspecify the propensity score.

3. Framework

To examine the effects of outliers on the average treatment effect, we identify outliers and implement two corrections: one in the spirit of the Stahel (1981) and Donoho (1982) estimator as proposed by Verardi et al. (2012), and the other in the spirit of Verardi and McCathie (2012). Next, we briefly describe the outlier classification criteria, the reweighting methods, and the matching methods used.
(i) Classification of outliers: Semi-parametric estimators of treatment effects may be sensitive to outliers. To understand the sources of bias caused by outliers, we draw on simple cross-section regression analysis. Rousseeuw and Leroy (2005) distinguish three kinds of outliers: those in the error term (vertical outliers) and those in the explanatory variables (good and bad leverage points) (see Figure 1). Vertical outliers (VO) are observations that are far away from the bulk of the data in the y-dimension but behave similarly to the group in the x-dimension. In the treatment effects framework, these would be outliers on the outcomes of study. Good leverage points (GLP), on the other hand, are observations that are far from the bulk of the data in the x-dimension (i.e., outlying in the covariates) but are aligned with the treatment effect. These outliers go in the same direction as the cloud of data and the treatment; thus, they do not affect the estimates, but they can affect the inference and induce a type I or type II error when testing the estimates. Finally, bad leverage points (BLP) are observations that are far from the bulk of the data in the x-dimension and located away from the treatment; these observations may bias the estimates.3
(ii) A reweighted estimator: What we suggest for coping with the effect of these outliers is to identify all types of outliers in the data and down-weight their importance (a one-step reweighted treatment effect estimator). We suggest two measures of multivariate location and scatter to identify outliers: one is the S-estimator of Verardi and McCathie (2012) (Smultiv); the other follows Verardi et al. (2012) and applies, as an outlier identification tool, the projection-based method of Stahel (1981) and Donoho (1982), hereafter called SD. Once outliers have been identified, we propose a reweighting scheme in which any outlier is given a weight of zero. The optimization problem of the S-estimator is to minimize the sum of the squared distances between each point and the center of the distribution. In a multivariate setting, to obtain a multivariate estimator of location, Verardi and McCathie (2012) depart from the squared distance used in the Mahalanobis distance and replace the square function with an alternative function, called ρ, to obtain robust distances.4
The Stahel (1981) and Donoho (1982) estimator of location and scatter (SD) consists of calculating the outlyingness of each point by projecting the data cloud unidimensionally in all possible directions and estimating the distance from each observation to the center of each projection. The degree of outlyingness is defined as the maximal distance obtained when considering all possible projections. Since this outlyingness distance (δ) is distributed as χ2 with p degrees of freedom, we can choose a quantile above which we consider an observation to be outlying (we consider here the 95th percentile).5
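A minimal sketch of this projection idea, approximating the set of all directions with a large number of random unit vectors (exact SD implementations search directions more carefully; the function names here are our own):

```python
import numpy as np
from scipy.stats import chi2

def sd_outlyingness(X, n_dirs=500, seed=0):
    """Approximate Stahel-Donoho outlyingness: project the cloud onto
    random unit directions and take, per observation, the maximal
    robustly standardized distance (median/MAD) over all projections."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    D = rng.standard_normal((n_dirs, p))
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit directions
    proj = X @ D.T                                  # (n, n_dirs) projections
    med = np.median(proj, axis=0)
    mad = 1.4826 * np.median(np.abs(proj - med), axis=0)  # consistent MAD
    mad[mad == 0] = 1e-12
    return np.max(np.abs(proj - med) / mad, axis=1)

def flag_outliers(X, alpha=0.95):
    """Flag observations whose squared outlyingness exceeds the
    chi-squared quantile with p degrees of freedom."""
    return sd_outlyingness(X)**2 > chi2.ppf(alpha, df=X.shape[1])
```

In the reweighting step, flagged rows would simply receive a weight of zero before the treatment effect is re-estimated.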
An interesting feature of these methodologies is that, unlike with other multivariate tools for identifying outliers, like the minimum covariance determinant estimator (MCD), dummies are not a problem.6 This feature is important, because we are considering treatment effects and the main variable of interest is a dummy (T_i). Moreover, the presence of categorical explanatory variables in treatment effects empirical research is extremely frequent. The advantage of the SD tool is its geometric approach: in regression analysis, even if one variable is always seen as dependent on others, geometrically there is no difference between explanatory and dependent variables, and the data thus form a set of points (Y_i, T_i, X_i) in a (p + 2)-dimensional space. Thus, an outlier can be seen as a point that lies far away from the bulk of the data in any direction. Note that the utility of these tools is not restricted to treatment effect models; it can be implemented to detect outliers in a broad range of models (see Verardi et al. (2012) for some applications).
Once the outliers have been identified by either the S-estimator or the SD, we propose to re-weight outlier observations to estimate the treatment effect. In this paper, we use the most drastic weighting scheme, which consists of awarding a weight of 0 to any outlying observation. Once the importance awarded to outliers is down-weighted, the bias coming from outliers will disappear.7
(iii) Matching methods: In the setup, we rely on the traditional potential outcome approach developed by Rubin (1974), which views causal effects as comparisons of potential outcomes defined for the same unit.8
We examine the effect of outliers on the following matching estimators: propensity-score pair matching, propensity-score ridge matching, reweighting based on the propensity score, and bias-corrected pair matching. Large sample properties for these estimators have been advanced by Heckman et al. (1997a); Hirano et al. (2003); Abadie and Imbens (2006). Pair matching proceeds by finding for each treated observation X_i a control observation X_j with a very similar value of the metric m(X) (propensity score or Mahalanobis distance). Ridge matching (Seifert and Gasser (2000)) is a variation of kernel matching based on a local linear regression estimator that adds a ridge term to the denominator of the weight W_ij given to matched control observations in order to stabilize the local linear estimator. To estimate it, we consider the Epanechnikov kernel. The bias-corrected pair matching estimator attempts to remove the bias in the nearest-neighbor covariate matching estimator coming from inexact matching in finite samples. It adjusts the difference within the matches for the differences in their covariate values. This adjustment is based on regression functions (see Abadie and Imbens (2011) for details).
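The simplest of these, propensity-score pair matching for the TOT, can be sketched as follows (illustrative code with our own function names; the logit is fitted by plain Newton-Raphson):

```python
import numpy as np

def fit_logit(X, T, n_iter=25):
    """Newton-Raphson logit fit (intercept added internally); returns
    the fitted propensity scores and the coefficient vector."""
    Z = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Z @ b))
        H = Z.T @ (Z * (p * (1.0 - p))[:, None])   # Hessian of the log-likelihood
        b += np.linalg.solve(H, Z.T @ (T - p))     # Newton step
    return 1.0 / (1.0 + np.exp(-Z @ b)), b

def pair_matching_tot(Y, T, pscore):
    """One-to-one nearest-neighbour matching with replacement on the
    propensity score; averages Y(treated) - Y(matched control)."""
    treated = np.flatnonzero(T == 1)
    control = np.flatnonzero(T == 0)
    gaps = [Y[i] - Y[control[np.argmin(np.abs(pscore[control] - pscore[i]))]]
            for i in treated]
    return float(np.mean(gaps))
```

On clean simulated data from the benchmark design, this estimator recovers the true effect up to sampling noise; the contamination experiments below show how that breaks down.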

4. Monte Carlo Setup

We examine the effects of outliers through Monte Carlo simulations and implementing both methods to identify outliers, the S-estimator (Smultiv) and the SD. The Monte Carlo data generation process we employ is as follows:
T_i = 1(T_i* > 0)
T_i* = f(X_i) + μ_i
Y_i = τ T_i + γ X_i + ε_i
where μ_i ∼ N(0, 1) and ε_i ∼ N(0, 1) are independent of X_i ∼ N(0, 1) and of each other. The sample size used is 1000, the number of covariates is p ∈ {2, 10}, and 2000 replications are performed. The experiment is designed to detect the effect of outliers on the performance of various matching estimators, and a benchmark scenario is considered that sidesteps possible sources of bias in the estimation, like poor overlap in the metrics between treatment and control units, misspecification of the metric, etc. The idea is to see how outliers can move us away from this benchmark case. The design of the Monte Carlo study consists of two parts: (i) the functional form and distribution of the metric in the treated and control groups and (ii) the kind of outlier contaminating the data.
Following Frölich (2004), the propensity score is linked, through T*, to the linear function f(X_i) = α + βX_i, and through the choice of different values for α, different ratios of control to treated observations are generated.9 The parameter α manages the average value of the propensity score and the number of treated relative to the number of controls in the sample. Four designs are considered. In the first (for p = 2), f(X_i) = 0.5X_1 + 0.5X_2, the population mean of the propensity score is 0.5.10 That is, the expected ratio of control to treated observations is 1:1. In the second, f(X_i) = 0.7 + 0.5X_1 + 0.5X_2, the ratio is 3:7 (the pool of treated observations is large), and in the third, f(X_i) = −0.7 + 0.5X_1 + 0.5X_2, the ratio is 7:3 (the controls greatly exceed the treated). We include these first three designs because during the estimation of the counterfactual mean, more precisely during the matching step, the effects of outliers in the treated or control groups could be offset by the number of observations in this group. The fourth design considers treatment and control groups of equal size, but with a nonlinear specification of the propensity score on the covariate of interest, f(X_i) = 0.5X_1 + 0.15X_1² + 0.5X_2. In addition, Y_i = 0.15 + T_i + 0.5X_1 + 0.5X_2; that is, the true treatment effect is 1. In the DGP we do not consider different functional forms for the conditional expectation function of Y_i given T_i. Results from Frölich (2004) suggest that when the matching estimator uses the average, the effects of these nonlinearities may disappear.
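For p = 2, the four designs can be generated as follows (a sketch of the DGP above; the function name is our own):

```python
import numpy as np

def simulate_design(n=1000, design=1, tau=1.0, seed=0):
    """Generate one Monte Carlo sample (p = 2) from the DGP:
    T = 1(f(X) + mu > 0), Y = 0.15 + tau*T + 0.5*X1 + 0.5*X2 + eps."""
    rng = np.random.default_rng(seed)
    X1, X2 = rng.standard_normal(n), rng.standard_normal(n)
    mu, eps = rng.standard_normal(n), rng.standard_normal(n)
    if design == 1:      # 1:1 ratio of controls to treated
        f = 0.5 * X1 + 0.5 * X2
    elif design == 2:    # 3:7, the pool of treated observations is large
        f = 0.7 + 0.5 * X1 + 0.5 * X2
    elif design == 3:    # 7:3, controls greatly exceed the treated
        f = -0.7 + 0.5 * X1 + 0.5 * X2
    else:                # nonlinear propensity score on X1
        f = 0.5 * X1 + 0.15 * X1**2 + 0.5 * X2
    T = (f + mu > 0).astype(int)
    Y = 0.15 + tau * T + 0.5 * X1 + 0.5 * X2 + eps
    return np.column_stack([X1, X2]), T, Y
```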
The probability of assignment is bounded away from 0 and 1: ς < P(X_i) ≡ P(T_i = 1 | X_i) < 1 − ς for some ς > 0. This strict overlap assumption is always satisfied in these designs. Following Khan and Tamer (2010), it is a sufficient condition for √n-consistency of semi-parametric treatment effect estimators. Busso et al. (2009) show this when X_i and μ_i are standard normally distributed for the linear specification of f(X_i). The assumption is achieved when |β| < 1. The intuition behind this result is that as β approaches 1, an increasing mass of observations have propensity scores near either 0 or 1. This leads to fewer and fewer comparable observations and an effective sample size smaller than n. Significantly, this implies potentially poor finite sample properties of semi-parametric estimators in contexts where β is near 1. In our designs, we set β = 0.5 for both the linear and nonlinear functions f(X_i). The overlap plots support the achievement of the strict overlap assumption for these cases, because they do not display mass near the corners. This can be observed in Figure 2, where the conditional density of the propensity score given treatment status (overlap plot) is displayed for the four designs considered in the Monte Carlo simulations.11
The second part of the design concerns the type of contamination in the sample. To grasp the influence of the outliers, we consider three contamination setups inspired by Croux and Haesbroeck (2003). The first is clean, with no contamination. In the second, mild contamination, 5% of the X_1 values are awarded a value 1.5p units larger than what the DGP would suggest. In the third, severe contamination, 5% of the X_1 values are awarded a value 5p units larger than what the DGP would suggest. As mentioned above, three types of outliers are recognized in the literature: bad leverage points, good leverage points, and vertical outliers. Nine additional scenarios can thus be considered in the analysis, depending on the localization of these outliers in the sample. That is, the three types can be located in the treatment sample (T), in the control sample (C), or in both groups (T and C). Therefore, we assess the relative performance of the estimators described in the last section in a total of 72 different contexts. These contexts are characterized by combinations of four designs for f(X_i), two types of contamination (mild and severe), and three types of outliers located in the treatment group, the control group, and both groups, respectively.
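A sketch of the contamination step, shifting 5% of the X_1 values in the chosen group by 1.5p (mild) or 5p (severe) units, as described above. Whether the shifted points end up as good or bad leverage points depends on how their treatment status relates to the shift; this helper (our own) only moves the covariates:

```python
import numpy as np

def contaminate(X, T, group="treatment", severity="mild", frac=0.05, seed=0):
    """Shift a fraction `frac` of X1 upward by 1.5*p (mild) or 5*p
    (severe) units within the chosen group; returns the contaminated
    matrix and the indices of the shifted rows."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    p = X.shape[1]
    shift = 1.5 * p if severity == "mild" else 5.0 * p
    if group == "treatment":
        pool = np.flatnonzero(T == 1)
    elif group == "control":
        pool = np.flatnonzero(T == 0)
    else:                                   # both groups
        pool = np.arange(len(T))
    idx = rng.choice(pool, size=max(1, int(frac * len(pool))), replace=False)
    X[idx, 0] += shift
    return X, idx
```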

5. The Effect of Outliers in the Estimation of Treatment Effects

5.1. The Effect of Outliers in the Metrics

Based on the artificial data set, this section presents the results by means of two simple cases, the effect of outliers in the estimation of the metrics used to define similarity and the effect of these (spurious) metrics in the assignment of matches when finding counterfactuals.
  • (a) The distribution of the propensity score in the presence of outliers
Using the artificial data set, we graphically show some stylized facts of the distribution of the propensity score with and without outliers. The original distribution of the propensity score by treatment status (overlap plot) corresponds to the thin lines in the left graph of Figure 3, while the thick solid lines in the left graph represent the overlap plots for the same sample, but with 5% of the data contaminated by bad leverage points in the treatment sample. As can be seen, the propensity scores are now clearly less spread out than those obtained with the original data in both the treatment and control groups. In the right graph of Figure 3, the straight line corresponds to the values of the original propensity score, whereas the cloud of points corresponds to the values of the propensity score in the presence of bad leverage points in the treatment sample; large differences in the values of the propensity score between the original and the contaminated sample are clear. Note that these effects are identical if we consider bad leverage points either in the control sample or in both the treatment and control groups.
The distribution of the propensity score by treatment status in the presence of good leverage points in the treatment sample, in the control sample, and in both samples can be seen in the top three graphs of Figure 4. In the bottom graphs, the straight line corresponds to the values of the original propensity score, whereas the cloud of points corresponds to the values of the propensity score in the presence of good leverage points. As can be observed, in contrast to bad leverage points, the good leverage points do not completely change the distribution of the propensity score.
A theoretical explanation for these results can be found in Croux et al. (2002), who showed that the maximum likelihood estimator in binary models is non-robust to outliers: rather than exploding to infinity, as in ordinary linear regression, it implodes to zero when bad leverage outliers are present in the data set. That is, consider the maximum likelihood estimator for a binary dependent variable,
β̂_ML = arg max_β LogL(β; X_n)
where LogL(β; X_n) is the log-likelihood function evaluated at β. Croux et al. (2002) presented two important facts: (i) good leverage points (GLP) do not perturb the fit obtained by the ML procedure, that is, β_ML^GLP ≈ β_ML. However, as displayed in Figure 4, the fitted probabilities of these outlying observations will be close to 0 or 1. This can lead to unstable estimates of the treatment effects, because the support (or overlap) condition is not met; and (ii) in the presence of bad leverage points (BLP), the ML estimator never explodes, instead asymptotically tending to 0, that is, β_ML^BLP → 0. In addition, following Frölich (2004) and Khan and Tamer (2010), coefficients close to zero in the estimation of the propensity score reduce the variability of the propensity score, as these coefficients (β) determine its spread. Therefore, the presence of bad leverage points in the data will always narrow the distribution of the propensity score, as shown in Figure 3. As is shown below, this tightness in the distribution of the propensity score may increase the chance of matching observations with very different characteristics.
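The implosion can be reproduced in a few lines: pushing a handful of control observations far out in X_1 shrinks the fitted logit slope toward zero rather than inflating it (illustrative code; the contamination share here is smaller than the 5% used in the simulations):

```python
import numpy as np

def logit_coefs(X, T, n_iter=50):
    """Newton-Raphson logit; returns the coefficient vector (intercept first)."""
    Z = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Z @ b))
        H = Z.T @ (Z * (p * (1.0 - p))[:, None])
        b += np.linalg.solve(H, Z.T @ (T - p))
    return b

rng = np.random.default_rng(0)
n = 1000
X = rng.standard_normal((n, 2))
T = (0.5 * X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n) > 0).astype(int)
b_clean = logit_coefs(X, T)

# bad leverage points: 20 CONTROL observations pushed far out in X1,
# i.e. outlying covariates whose treatment status contradicts the fit
Xc = X.copy()
ctrl = np.flatnonzero(T == 0)
Xc[ctrl[:20], 0] += 10.0
b_blp = logit_coefs(Xc, T)
# the slope on X1 implodes toward zero instead of exploding
```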
To validate these stylized facts, we ran 2000 simulations for the different scenarios of our Monte Carlo setup. Table 1 provides the results in four panels, each corresponding to a scheme with a different outlier contamination level (mild or severe) and a different number of covariates (p = 2 and p = 10). The sample size is 1000 observations, and the specification of the propensity score corresponds to the first design, that is, a linear function with an equal number of observations in the treatment and control groups.12 This implies that we first obtain the propensity score. Once the propensity score is calculated, it is possible to characterize it through its mean, variance, kurtosis, and skewness. It is also possible to calculate the correlation between the propensity score in the clean scenario and the propensity score of any of the contaminated scenarios.13 Thus, the growth in variance is the percentage change of the variance of the propensity score in a specific scenario vs. the variance in the clean scenario; for example, for the BLP in the treatment group, this means that the variance of the propensity score of the contaminated data is 11.0% lower than in the clean data. The same reasoning applies to the growth in kurtosis. This reinforces what was observed in the left panel of Figure 3, where we can see that the contaminated data have lower variance and are more leptokurtic.
The loss in correlation corresponds to the change in correlation between the propensity score of a given scenario vs. clean data. That is, for the BLP in the treatment scenario, the correlation of the propensity score between the contaminated data and clean data is 7.9% lower with respect to the clean data. That is, the correlation of the clean propensity score with itself is 100%, while the correlation between the contaminated propensity score and the clean propensity score is 92.1%, representing a loss in correlation of 7.9%.
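These two diagnostics are straightforward to compute from the clean and contaminated scores (a sketch with our own function name):

```python
import numpy as np

def pscore_diagnostics(ps_clean, ps_contaminated):
    """Percentage growth in variance and percentage loss in correlation
    of a contaminated propensity score relative to the clean one."""
    var_growth = 100.0 * (np.var(ps_contaminated) / np.var(ps_clean) - 1.0)
    corr_loss = 100.0 * (1.0 - np.corrcoef(ps_clean, ps_contaminated)[0, 1])
    return var_growth, corr_loss
```

A negative variance growth corresponds to the shrinkage induced by bad leverage points; a correlation loss of 7.9% corresponds to a correlation of 92.1% between the contaminated and clean scores.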
The results are summarized in two sections of Table 1. The first presents estimates of the average absolute value of the coefficient of interest (β) and its mean squared error (MSE) multiplied by a factor of 1000, and the second presents the characteristics of the propensity score: its average value, variance growth, kurtosis, and loss of correlation, all four compared to their baseline scenario values. The columns represent different degrees of outlier contamination.
Regarding the parameter β in the specification of the propensity score, the true value is 0.5; the Clean column confirms that the simulation results for the baseline scenario are extremely close to the true parameter. The remaining columns show that in the presence of bad leverage points, the coefficient diverges from the true value, tending towards 0. By contrast, if the variable contains good leverage points, the regression coefficient remains unbiased. The behavior in each case is independent of the location of the leverage points. When focusing on the characteristics of the propensity score, several features stand out. First, outliers do not change the center of the propensity score distribution, since the mean is identical across the different contamination scenarios. Second, the form of the distribution is affected, because bad leverage points shrink the density of the propensity score. This is evidenced by the reduction in the variance and the rise in the kurtosis when this type of outlier is present in the data. As can be seen in Table 1, this effect is larger when the outlyingness level is greater and when fewer covariates are present in the estimation of the propensity score. Theoretically, this result is in line with Frölich (2004) and Khan and Tamer (2010), who suggest that coefficients close to 0 in the estimation of the propensity score, β_ML^BLP → 0, reduce the variability of this metric, as these coefficients (β) determine the spread of the propensity score.
Third, good leverage points have the opposite effect on the shape of the propensity score: the dispersion increases and the kurtosis is reduced. These variations are, however, smaller than those caused by bad leverage points. This happens because good leverage points nearly attain the extreme values of the propensity score (0 or 1) without significantly changing the complete distribution of this metric. Fourth, the propensity score contaminated by both types of outliers is less related to the score corresponding to the baseline scenario. This is revealed by the reduction of the linear correlation between the baseline propensity score and the score with outliers. This reduction in the linear association is strong with bad leverage points and weak with good leverage points in the data. All these effects remain independent of the outlier location.
The effect of these distortions in the density of the propensity score on the matching process and on the treatment effect estimation is discussed in the following sections.
  • (b) The distribution of the Mahalanobis distance in the presence of outliers
As in the case of the propensity score matching, the left side of Figure 5 shows the distributions of the Mahalanobis distance without outliers and in the presence of bad leverage points. In the right graph, the straight line corresponds to the values of the Mahalanobis distance computed with the clean data, whereas the cloud of points corresponds to the values of this metric in the presence of bad leverage points in the treatment group; the blue observations correspond to non-outlier observations, while the red ones are the outliers.
Three remarks can be made on the basis of these graphs. First, good and bad leverage points behave atypically in the sense that they display larger distances. Since Mahalanobis distances are computed individually for each observation, bad and good leverage points present larger values, whereas the remaining observations stay relatively stable. This behavior is independent of the location of the outlier. Second, while bad and good leverage points only slightly change the distribution of the distances, the stability of the uncontaminated observations is only relative, in the sense that all distances are standardized by the sample covariance matrix of the covariates ( S^{-1} ), which is in turn based on measures of the averages and variances that are themselves biased by the outliers. Third, it may be fallacious to directly label observations with large distances as outliers: for such labeling to be reliable, the distances need to be estimated by a procedure that is robust against outliers. This is the masking effect (Rousseeuw and Van Zomeren 1990). Single extreme observations, or groups of observations departing from the main data structure, can have a heightened influence on this distance measure, because the covariance matrix ( S ) is estimated in a non-robust manner, that is, it is biased.
Table 2 reports the simulations for the behavior of the Mahalanobis distance under the different contaminated-data scenarios. The outlier intensity is displayed in panels A to D and the outlier location is displayed in the columns. In each panel, the average of the distances, the growth of the Mahalanobis distance variance, asymmetry, kurtosis, and linear correlation with respect to the clean data scheme are displayed. The results correspond to 2000 simulations for the first design of our Monte Carlo setup.14 We turn first to the results in the center of the distribution: as can be seen, no leverage point changes the mean of the Mahalanobis distances, whereas outliers distort the shape of this metric distribution.
Driven by the extreme values of the distances corresponding to the outlier observations, the variance increases abruptly and the distribution becomes positively skewed and more leptokurtic, as shown in Figure 5. These characteristics are independent of the location of the outliers. In addition, as displayed in the last row of each panel in Table 2, the distances in the presence of outliers are significantly less linearly related to the original Mahalanobis distances.
Overall, the results of this section suggest that the distribution of the Mahalanobis distance becomes narrower, with longer tails, in the presence of bad and good leverage points in the data. This effect is a combination of two factors: outlier observations are associated with the largest values of the distances, and all distances change as they are standardized with a covariance matrix distorted by the outliers. This behavior of the Mahalanobis distance suggests a way to detect the presence of outliers in the sample.
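The masking mechanism can be seen numerically in a small sketch (the data are illustrative, not our DGP): squared Mahalanobis distances of a few planted outliers are computed once with the contaminated sample mean and covariance, and once with location and scatter taken from the uncontaminated bulk, which stands in here for a robust estimator such as Smultiv or the MCD.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 2
X = rng.normal(size=(n, p))
X[:5] += 8.0                         # five outliers shifted far from the bulk

def sq_mahalanobis(X, center, cov):
    """Squared Mahalanobis distance of every row of X from `center` under `cov`."""
    diff = X - center
    return np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)

# Classical: mean and covariance estimated from ALL points, outliers included.
d_classical = sq_mahalanobis(X, X.mean(0), np.cov(X, rowvar=False))

# Benchmark: location and scatter from the clean bulk only (a stand-in for a
# robust estimator; in practice the clean subset is of course unknown).
d_robust = sq_mahalanobis(X, X[5:].mean(0), np.cov(X[5:], rowvar=False))

# Masking: the outliers inflate the classical covariance, so their own
# classical distances are far smaller than the distances a robust fit reveals.
print(d_classical[:5].mean(), d_robust[:5].mean())
```

The outliers remain the largest classical distances here, but their magnitude is understated several-fold; with heavier contamination the understatement can hide them entirely.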

5.2. The Matching Process in the Presence of Outliers, a Toy Example

To illustrate the effect of outliers on the assignment of matches when finding counterfactuals, we use a small data set consisting of fifteen normally distributed observations generated under the first design of our DGP. These variables are presented in columns 1–4 of Table 3. The exercise consists of artificially substituting the value of one observation in one covariate and observing in detail its effect on the matches assigned. One bad and one good leverage point are generated by moving the value of the first treated observation of X 1 by −5√2 and by +5√2, respectively (severe contamination). Columns 5 to 7, and 11 to 13, of Table 3 present the propensity score and the Mahalanobis distance estimated with the original and contaminated data. As can be seen, the distributions of the propensity score and the Mahalanobis distance with bad leverage points change completely. Observations 2 and 3, for example, change their probability of participating in the program from 0.78 to 0.47 and from 0.99 to 0.55 when looking at the propensity score matching, and from 0.51 to 0.02 and from 4.40 to 0.90 when examining the Mahalanobis distance.
The distribution of the propensity score with good leverage points, by contrast, stays close to its original shape. Columns 8 to 10 show the consequent effect of the variation in this metric on the matches assigned to generate the counterfactuals (using the nearest-neighbor criterion).15 Consider observations 5 and 6, for example. Initially, observations 14 and 12 are selected as counterfactual observations, but due to the presence of the bad leverage point, the nearest observation now corresponds to observation 9 in both cases. The matches assigned in the presence of good leverage points are the same as with the original data. Columns 11 to 13 show the behavior of the Mahalanobis distance. Unlike the propensity score matching, the toy example shows that there are also significant changes when examining good leverage points in the Mahalanobis distance. Thus, the presence of outliers may distort the treatment effects even more when this metric is used.
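A minimal version of this exercise for the Mahalanobis metric can be coded directly. The data below are hypothetical (five treated and ten control units with two covariates, not the rows of Table 3), but the contamination step is the same: the first treated unit's first covariate is moved by −5√2, and because the sample covariance enters every distance, the assigned matches can change even for uncontaminated units.

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, n_c, p = 5, 10, 2
X = rng.normal(size=(n_t + n_c, p))   # rows 0-4 treated, rows 5-14 controls

def nn_mahalanobis_matches(X, n_t):
    """For each treated row, the index (0..n_c-1) of the nearest control,
    with distances standardized by the full-sample covariance matrix."""
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X[:n_t, None, :] - X[None, n_t:, :]
    dist = np.einsum('tcj,jk,tck->tc', diff, S_inv, diff)
    return dist.argmin(axis=1)

m_clean = nn_mahalanobis_matches(X, n_t)

# One bad leverage point: shift the first treated unit's first covariate.
X_bad = X.copy()
X_bad[0, 0] -= 5 * np.sqrt(2)
m_bad = nn_mahalanobis_matches(X_bad, n_t)

print(m_clean, m_bad)   # matches may differ even for rows 1-4, because the
                        # outlier distorts the covariance used by all pairs
```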
For a proper estimation of the unobserved potential outcomes, we want to compare treated and control groups that are as similar as possible. These simple illustrations show that extreme values can easily distort the metrics used to define similarity and thus may bias the estimates of treatment effects by making the groups very different. That is, the prediction of Y ^ i 0 for the treated group is made using information from observations that are different from them. In the next section, we present evidence about the effects on the treatment effect estimation.

5.3. The Effect of Outliers on the Different Matching Estimators

In this section we examine the effect of outliers on the estimation of treatment effects under different scenarios and through a set of matching estimators. Table 4 shows the performance in the estimation of the average treatment effect on the treated of the four selected estimators for the first design of our DGP. It presents the bias and the mean squared error from 2000 replications. The sample size ( n ) is 1000 and the number of covariates is p { 2 , 10 } . The four panels resemble the panels presented in Table 1. However, in this case, columns correspond to the type of outlier and rows to the estimators. Column 1, called clean, corresponds to the no-contamination scenario. Columns 2–4 contain bad leverage points in the treatment, control, and both groups simultaneously, respectively. Similarly, columns 5–7 consider good leverage points, whereas columns 8–10 correspond to vertical outliers in the treatment, control, and both samples, respectively. To compute the bias, we first calculated the bias of a given estimate, for example pair matching, by subtracting the true effect from the treatment estimate; that is, bias = ( β̂_{TOT}^{pair matching} − 1 ). The same was done for each of the scenarios, and the results (scaled by 1000) can be found in Table 4.16
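The bias and MSE bookkeeping just described can be sketched in a few lines. The DGP here is a deliberately simplified, hypothetical one (one covariate, a linear outcome with true TOT of 1, and pair matching implemented as 1-nearest-neighbor on the covariate), so the numbers are illustrative rather than a replication of Table 4.

```python
import numpy as np

rng = np.random.default_rng(3)
true_tot, reps, n = 1.0, 500, 300

def pair_matching_tot(y, d, x):
    """TOT by 1-NN matching with replacement: each treated unit is compared
    with the control whose covariate value is closest."""
    yt, xt = y[d == 1], x[d == 1]
    yc, xc = y[d == 0], x[d == 0]
    j = np.abs(xt[:, None] - xc[None, :]).argmin(axis=1)
    return (yt - yc[j]).mean()

est = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    d = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-0.5 * x))).astype(int)
    y = true_tot * d + x + rng.normal(size=n)     # selection on x only
    est[r] = pair_matching_tot(y, d, x)

bias = (est.mean() - true_tot) * 1000    # scaled by 1000, as in the tables
mse = ((est - true_tot) ** 2).mean() * 1000
print(round(bias, 2), round(mse, 2))
```

Contaminating x or y inside the replication loop, as in the designs above, and re-running the same bookkeeping reproduces the structure of the table columns.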
The results suggest several important conclusions. First, in the absence of outliers, all the estimators we considered perform well, which is in accordance with the evidence provided by Busso et al. (2009) and Busso et al. (2014); however, there is a larger bias in the estimates when more covariates are present, regardless of the level of contamination. The bias-corrected covariate matching of Abadie and Imbens (2011) has the smallest bias, followed by the local linear ridge propensity score matching, the pair matching estimator, and the reweighting estimator based on the propensity score. Second, in the presence of bad leverage points, all the estimates present a considerable bias. For the propensity score matching methods, the size of the bias is generally the same, independent of the location of the outlier. This is expected because, as explained in the last section, the complete distribution of the metrics changes when bad leverage points exist in the data. The spread of the metrics decreases, and observations that initially presented larger (lower) values of the metric may now match with observations that initially had lower (larger) values. Therefore, for pair matching, the spurious metric will match inappropriate controls. For local linear ridge matching, the weights W_{i,j}, which are a function (kernel) of the differences in the propensity score, will decrease notably. In the case of the reweighted estimator, some control observations will receive higher weights, because their propensity score values are higher than those from the original data, and some will receive lower weights (as the weights are normalized to sum to 1). Finally, for the bias-corrected pair matching, there is a clear difference in the clean data, since the correction reduces the bias there; however, the correction does not reduce the bias in contaminated samples.
For the multivariate distance matching estimators using the Mahalanobis distance, the results are very similar and can be found in Table A1 in Appendix A. Treatment observations with bad leverage points bias the treatment effect estimates, as the distribution of the distances changes completely. Moreover, outlier observations present larger values of the metric and are matched to inappropriate controls. Bad leverage points in the control sample have little effect on the estimates of the average treatment effect on the treated: although the distribution of the distances still changes completely, the outlier observations themselves are less likely to be selected as counterfactuals.
Third, good leverage points in the treatment sample also bias the treatment effect estimates of the propensity score matching estimators. Good leverage points in the treatment sample have estimated probabilities of receiving treatment close to 1. These treated observations with outlying values lack suitable controls against which to compare them. This violates the overlap assumption and therefore increases the likelihood of biasing the matching estimates. In the case of the reweighted estimator, the unbiasedness is explained by the fact that only the outliers receive higher weights, while the remaining observations keep almost the same weight (slightly modified by the normalization procedure). Moreover, good leverage points in the treatment group yield a large bias in the covariate matching estimator. This effect, which is similar to that of bad leverage points, is explained by these outlying observations having larger values of the metric and therefore being matched to inappropriate controls.
Fourth, good leverage points in the control sample do not affect matching methods. For the propensity score matching estimates, the values of the propensity score for the outliers are close to 0; these observations cause little difficulty, because they are unlikely to be used as matches. For the reweighted estimator, these outlying observations receive weights close to zero. For the covariate matching estimators, good leverage points in the control sample have little effect on the estimates, because such observations are less likely to be selected as counterfactuals. Fifth, when good leverage points are present in both samples, treatment effect estimates are biased. This bias most likely comes from the outliers in the treatment group. Sixth, vertical outliers bias the treatment effect estimates. This bias is easy to understand, because extreme values in the outcomes, Y_i^1 or Y_i^0, will pull the average values of their respective groups toward them, independent of the estimator used to match the observations. Seventh, an immediate effect of outliers is the rejection of the balancing hypothesis. It is noteworthy that very similar results are obtained when estimating the Average Treatment Effect (ATE); these results can be found in Table A2 in Appendix A.17

5.4. Outliers and Balance Checking

Achieving covariate balance is very important, because it is what justifies ignorability of the observed covariates, allowing for valid causal inference after the treatment effect estimation (Imbens (2004)). Thus, once the observations are matched, it is important to assess the quality of the matches in order to ensure that the control group has a distribution of the metric similar to that of the treatment group. Among the approaches to analyzing the balancing hypothesis are those presented by Dehejia and Wahba (2002) and Stuart et al. (2013); however, it is common practice to report the standardized mean difference and the variance ratio as statistical measures. To calculate these measures for the contaminated variable, as well as for the remaining variables, we obtained the estimate of TOT for nearest-neighbor matching with 1 neighbor and bias adjustment. Then, for each covariate, the standardized difference (bias) and the variance ratio are calculated between treated and control groups; a perfectly balanced covariate would have a difference in means of zero and a variance ratio of one. For both bad and good leverage points, the standardized difference for the contaminated variable is very similar to that of the clean scenario; however, the variance ratio increases in most contaminated scenarios, with the exception of outliers in the control group, since those would simply not be used in the matching process. For the remaining uncontaminated covariates, the bias and the variance ratio are close to the clean scenario. Moreover, the bias and the variance ratio are consistently larger when there is a large number of covariates, regardless of the severity of the contamination.
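The two balance diagnostics are straightforward to compute. The sketch below (with hypothetical treated and control samples, not our simulated designs) shows how a few good leverage points inflate the variance ratio while leaving the standardized mean difference comparatively stable.

```python
import numpy as np

def balance_stats(x_t, x_c):
    """Standardized mean difference (in %) and variance ratio for one covariate."""
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    smd = 100 * (x_t.mean() - x_c.mean()) / pooled_sd
    return smd, x_t.var(ddof=1) / x_c.var(ddof=1)

rng = np.random.default_rng(4)
x_t, x_c = rng.normal(size=500), rng.normal(size=500)
print(balance_stats(x_t, x_c))       # near (0, 1): a balanced covariate

# A few good leverage points in the treated sample: the variance ratio
# moves well away from 1, while the standardized difference changes little.
x_t_glp = x_t.copy()
x_t_glp[:5] += 8.0
print(balance_stats(x_t_glp, x_c))
```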
Table 5 shows the percentage bias and the variance ratio calculated for the baseline (clean) data and for the different contaminated scenarios, which are displayed in the columns. These results correspond to the first design of our Monte Carlo setup.

5.5. Solving the Puzzle, Re-Weighting Treatment Effects to Correct for Outliers

Finally, Table 6 analyzes the effectiveness of the re-weighted treatment effect estimator based on the Verardi and McCathie (2012) Smultiv tool for the identification of outliers.18 The structure of the table is similar to that of Table 4. The results suggest, first, that the re-weighting algorithms perform well in a scenario without outliers; that is, applying the algorithms does not influence the estimates of treatment effects if no outliers are present in the data.19 Second, as expected, the re-weighted estimators we propose are unaffected by the presence of outliers and lead to estimates that are similar to those obtained with the clean sample in all contamination scenarios. Third, and perhaps most importantly, while both algorithms perform well in reducing the bias that outliers impose, both for the propensity score and for the Mahalanobis distance, Smultiv outperforms SD. This holds for most of the different matching estimators implemented in this paper.
Similar to the conclusions of Table 4, there is a clear difference in the performance of the estimators when comparing two covariates vs. ten covariates; regardless of the level of contamination, there is a larger bias and MSE with ten covariates due to the noise introduced by the number of covariates.20 It is worth mentioning that the general conclusions obtained with designs two to four are very similar, although the effect of outliers is slightly smaller with design four. These results are available upon request.
Additionally, we compare the results from the re-weighting algorithms to a “naive approach”, which extends to a multivariate setting the rule that outliers are observations lying more than 3 standard deviations from the mean. We find that both algorithms outperform the naive approach in almost all scenarios. The only scenario in which the naive approach performs on par with the SD algorithm is when there is severe contamination and ten covariates. We believe this is due to the large number of covariates and the noise they introduce into the data; in all other cases, the algorithms outperform the naive approach. Results for this naive approach can be found in Table A5 in Appendix A.
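For concreteness, a coordinate-wise version of this naive rule can be sketched as follows (the data and contamination pattern are hypothetical, and the paper's exact implementation may differ):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 500, 2
X = rng.normal(size=(n, p))
X[:10] += 8.0                        # contaminated block of ten observations

def naive_flags(X, k=3.0):
    """Flag a row if ANY covariate lies more than k (non-robust) standard
    deviations from its mean: the univariate rule applied coordinate-wise."""
    z = np.abs(X - X.mean(axis=0)) / X.std(axis=0)
    return (z > k).any(axis=1)

flags = naive_flags(X)
weights = np.where(flags, 0.0, 1.0)  # simplest re-weighting: drop flagged rows
print(flags[:10].mean(), flags[10:].mean())
```

Because the mean and standard deviation are themselves distorted by the contamination, this rule is vulnerable to the masking problem discussed above, which is why the SD and Smultiv algorithms rely on robust estimates of location and scale instead.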

6. An Analysis of the Dehejia-Wahba (2002) and Smith-Todd (2005) Debate with Regard to Outliers

A debate has arisen, starting with LaLonde (1986), concerning the evaluation of the performance of non-experimental estimators using experimental data as a benchmark. The findings of Dehejia and Wahba (1999, 2002) of low bias from applying propensity score matching to the data of LaLonde (1986), which suggested it was a good way to deal with the selection problem, contributed strongly to the popularity of this method in the empirical literature. However, Smith and Todd (2005) (hereafter called ST), using the same data and model specification as Dehejia and Wahba (hereafter called DW), suggest that the low-bias estimates presented in DW are quite sensitive to the sample and the propensity score specification, and thus claim that matching methods do not solve the evaluation problem when applied to LaLonde’s data.21
In this section, we suggest that the DW propensity score model’s inability to approximate the experimental treatment effect when applied to LaLonde’s full sample is driven by the existence of outliers in the data. When the effect of these outliers is down-weighted, the DW propensity score model presents low bias. Note that we do not interpret these results as proof that propensity score matching solves the selection problem, because the third subsample (ST sample) continues to report biased matching estimates after down-weighting the effect of outliers. Moreover, these data allow us to highlight the role of outliers when checking the balance of the covariates in the specification of the propensity score. Dehejia (2005), in a reply to ST, argues that a different specification should be selected for each treatment group/comparison group combination, and that ST misapplied the specifications that DW selected for their samples to samples for which the specifications were not necessarily appropriate “as covariates are not balanced”. Dehejia (2005) states that with suitable specifications selected for these alternative samples and with well-balanced covariates, accurate estimates can be obtained. Remember that in estimating the propensity score the specification is determined by the need to condition fully on the observable characteristics that make up the assignment mechanism. That is, the distribution of the covariates should be approximately the same across the treated and comparison groups, once the propensity score is controlled for. The covariates can be defined as well-balanced when the differences in propensity score for treated and comparison observations are insignificant (see the appendix in Dehejia and Wahba (2002)).
ST argue that matching fails to mitigate LaLonde’s critique of non-experimental estimators, because matching produces a large bias when applied to LaLonde’s full sample. Dehejia (2005), on the other hand, states that this failure comes from the use of an incorrect specification of the propensity score for that sample (as the covariates are not balanced). In this section, we suggest that matching has low bias when applied to LaLonde’s full sample and that the specification of the propensity score employed was not wrong; rather, the issue is that the sample was contaminated with outliers. These outliers initially distorted the balance of the covariates, leading Dehejia (2005) to conclude that the specification was not right, and also biased the estimates of the treatment effect, causing ST to conclude that matching does not approximate the experimental treatment effect when applied to LaLonde’s full sample. These conclusions can be found in Table 7, which shows the propensity score nearest-neighbor treatment effect estimates (TOT) for DW’s subsample and LaLonde’s full sample.22 The dependent variable is real income in 1978. Columns 1 and 2 describe the sample, that is, the comparison and treatment groups, respectively. Column 3 reports the experimental treatment effect for each sample.23 Column 4 presents the treatment effect estimates for each sample. The specification of the propensity score corresponds to that used by Dehejia and Wahba (1999, 2002) and Smith and Todd (2005).24 Columns 5 and 6 report the treatment effect estimates for each sample, using the same specification as in column 4 and down-weighting the effect of the outliers identified by the Stahel-Donoho and Smultiv methods described in Section 3. Three remarks can be made on the basis of the results presented in Table 7. First, the treatment effect estimates for LaLonde’s sample (in column 4) are highly biased compared to the true effects (column 3), as shown by DW.
Second, once the outliers are identified and their importance down-weighted, the treatment effect estimates improve meaningfully in terms of bias, and the matching estimates approximate the experimental treatment effect when LaLonde’s full sample is considered. And third, once the effect of outliers is down-weighted, the propensity score specifications now balance the covariates successfully. This has practical implications: when choosing the variables to specify the propensity score, it may not be necessary to discard troublesome variables that may be relevant from a theoretical point of view or to generate senseless interactions or nonlinearities. It might be sufficient to discard troublesome observations (outliers). That is, outliers can push practitioners to misspecify the propensity score unnecessarily.

7. Conclusions

Assessing the impact of any intervention requires making inferences about the outcomes that would have been observed for program participants, had they not participated. Matching estimators impute the missing outcome by finding other observations in the data with similar covariates, but that were exposed to the other treatment. The criteria used to define similar observations, the metrics, are parametrically estimated by using the predicted probability of treatment (propensity score) or the standardized distance of the covariates (Mahalanobis distance).
Moreover, it is known that in statistical analysis the values of a few observations (outliers) often behave atypically from the bulk of the data. These atypical few observations can easily drive the estimates in empirical research.
In this paper, we examine the relative performance of leading semi-parametric estimators of average treatment effects in the presence of outliers. First, we find that bad leverage points bias estimates of average treatment effects. This type of outlier completely changes the distribution of the metrics used to define good counterfactuals and therefore changes the matches that had initially been made, assigning as matches observations with very different characteristics. Second, good leverage points in the treatment sample slightly bias estimates of average treatment effects and increase the chance of violating the overlap condition. Third, good leverage points in the control sample do not affect the estimates of treatment effects, because they are unlikely to be used as matches. Fourth, these outliers violate the balancing criterion used to specify the propensity score. Fifth, vertical outliers in the outcome variable greatly bias estimates of average treatment effects. Sixth, good leverage points can be identified visually by looking at the overlap plot; bad leverage points, however, are masked in the estimation of the metric and are difficult to identify. Seventh, the Stahel (1981) and Donoho (1982) estimators of scale and location, proposed by Verardi et al. (2012) (SD) and Verardi and McCathie (2012) (Smultiv) as tools to identify outliers, are effective for this purpose; however, we find evidence that Smultiv outperforms SD. Finally, an application of this estimator to the data of LaLonde (1986) allows us to understand the effects of outliers in a quasi-experimental setting.

Author Contributions

Conceptualization, G.C.-B., L.C.P., and D.U.O.; methodology, G.C.-B., L.C.P., and D.U.O.; software, G.C.-B., L.C.P., and D.U.O.; formal analysis, G.C.-B., L.C.P., and D.U.O.; writing—original draft preparation, G.C.-B., L.C.P., and D.U.O.; writing—review and editing, G.C.-B., L.C.P., and D.U.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Simulated bias and MSE of Average Treatment Effect on the Treated (TOT) estimates in the presence of outliers using Mahalanobis distance.
(Column abbreviations: BLP = bad leverage points; GLP = good leverage points; VO = vertical outliers in Y; T = treatment group; C = control group.)

Panel A: Severe contamination, ten covariates (p = 10)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 1086.22 | 1223.63 | 1119.44 | 1172.59 | 1157.35 | 1178.44 | 1168.45 | 2668.76 | −489.12 | 1088.07 |
| Ridge M. Epan | 842.51 | 720.66 | 919.31 | 843.54 | 693.92 | 996.84 | 895.42 | 2425.10 | −740.40 | 839.84 |
| IPW | 299.37 | 634.05 | 636.05 | 632.63 | 299.32 | 300.98 | 297.69 | 1881.92 | −1264.44 | 309.81 |
| Pair Matching (bias corrected) | −0.30 | 784.72 | −4.07 | 390.26 | −794.54 | −2.97 | −398.78 | 1582.25 | −1561.61 | 5.80 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 1195.58 | 1512.29 | 1269.62 | 1390.48 | 1354.94 | 1405.09 | 1380.86 | 7140.02 | 389.79 | 1270.85 |
| Ridge M. Epan | 728.11 | 545.59 | 864.04 | 731.77 | 510.25 | 1012.31 | 821.86 | 5901.36 | 736.89 | 810.76 |
| IPW | 164.17 | 459.52 | 464.34 | 458.64 | 164.85 | 172.71 | 167.73 | 3619.12 | 2173.43 | 421.42 |
| Pair Matching (bias corrected) | 33.63 | 647.77 | 34.49 | 179.25 | 705.14 | 34.03 | 206.16 | 2539.83 | 3115.53 | 385.02 |

Panel B: Severe contamination, two covariates (p = 2)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 30.91 | 208.75 | 36.59 | 122.37 | −44.75 | 38.48 | −2.37 | 739.06 | −677.65 | 32.78 |
| Ridge M. Epan | 10.32 | 18.98 | 13.36 | 11.95 | −29.65 | 15.18 | 13.39 | 718.08 | −698.28 | 11.13 |
| IPW | 16.56 | 366.56 | 366.76 | 366.70 | 13.15 | 14.52 | 14.14 | 724.71 | −694.33 | 18.71 |
| Pair Matching (bias corrected) | −0.45 | 352.34 | 3.28 | 176.75 | −351.72 | 0.43 | −175.19 | 707.70 | −709.09 | 1.41 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 11.98 | 58.24 | 12.97 | 26.70 | 18.82 | 12.97 | 12.71 | 557.67 | 501.67 | 28.60 |
| Ridge M. Epan | 8.29 | 9.41 | 8.89 | 8.62 | 9.03 | 8.87 | 8.62 | 524.74 | 513.96 | 17.84 |
| IPW | 11.37 | 140.60 | 140.75 | 140.72 | 11.44 | 12.32 | 11.91 | 536.91 | 512.48 | 20.91 |
| Pair Matching (bias corrected) | 11.53 | 144.05 | 12.26 | 43.13 | 162.99 | 12.16 | 49.71 | 512.82 | 549.09 | 29.90 |

Panel C: Mild contamination, ten covariates (p = 10)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 1086.22 | 1159.95 | 1082.64 | 1121.85 | 1095.43 | 1130.02 | 1112.92 | 1560.98 | 613.62 | 1086.77 |
| Ridge M. Epan | 842.51 | 834.56 | 841.42 | 837.11 | 756.97 | 924.32 | 844.29 | 1317.29 | 367.63 | 841.70 |
| IPW | 299.37 | 553.26 | 555.45 | 557.44 | 288.31 | 289.99 | 287.53 | 774.14 | −169.77 | 302.51 |
| Pair Matching (bias corrected) | −0.30 | 233.96 | 115.19 | 163.37 | −242.97 | 4.65 | −118.03 | 474.47 | −468.69 | 1.53 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 1195.58 | 1360.56 | 1188.79 | 1274.27 | 1215.94 | 1293.03 | 1254.26 | 2452.46 | 403.96 | 1202.47 |
| Ridge M. Epan | 728.11 | 715.45 | 727.87 | 720.04 | 593.99 | 873.91 | 732.86 | 1753.60 | 167.95 | 733.91 |
| IPW | 164.17 | 361.29 | 367.50 | 367.74 | 160.30 | 168.08 | 163.66 | 674.21 | 147.06 | 188.24 |
| Pair Matching (bias corrected) | 33.63 | 81.71 | 49.12 | 56.58 | 102.09 | 35.63 | 51.16 | 259.04 | 311.60 | 64.86 |

Panel D: Mild contamination, two covariates (p = 2)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 30.91 | 130.59 | 170.19 | 164.75 | −38.46 | 20.82 | −8.39 | 243.35 | −181.66 | 31.47 |
| Ridge M. Epan | 10.32 | 107.76 | 151.50 | 141.35 | −50.88 | −1.65 | −21.12 | 222.65 | −202.26 | 10.57 |
| IPW | 16.56 | 166.69 | 166.07 | 168.93 | −16.22 | −14.88 | −13.95 | 229.01 | −196.71 | 17.21 |
| Pair Matching (bias corrected) | −0.45 | 105.94 | 156.04 | 148.67 | −104.89 | −12.24 | −58.01 | 212.00 | −213.04 | 0.11 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 11.98 | 27.17 | 39.59 | 37.51 | 15.41 | 12.14 | 12.50 | 70.27 | 46.86 | 13.48 |
| Ridge M. Epan | 8.29 | 19.21 | 31.03 | 27.80 | 11.83 | 8.77 | 9.22 | 57.82 | 50.73 | 9.11 |
| IPW | 11.37 | 35.14 | 36.07 | 36.43 | 12.83 | 13.38 | 13.05 | 63.62 | 51.76 | 12.18 |
| Pair Matching (bias corrected) | 11.53 | 21.63 | 35.20 | 32.71 | 26.99 | 12.51 | 16.96 | 56.51 | 60.10 | 13.21 |
Note: The results use the Mahalanobis distance metric and 2000 replications. The statistics presented are the bias and MSE of each estimator (Pair matching, Ridge, IPW and bias corrected Pair matching) scaled by 1000. The bias is calculated by subtracting the true effect of the treatment (1) from the estimate of TOT. Each column represents a contamination type and placement: Clean; BLP; GLP; and vertical outliers, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
Table A2. Simulated bias and MSE of Average Treatment Effect (ATE) estimates in the presence of outliers using propensity score.
(Column abbreviations: BLP = bad leverage points; GLP = good leverage points; VO = vertical outliers in Y; T = treatment group; C = control group.)

Panel A: Severe contamination, ten covariates (p = 10)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 157.33 | 555.59 | 554.08 | 549.96 | 104.26 | 106.62 | 102.91 | 1766.01 | −1425.13 | 178.94 |
| Ridge M. Epan | 147.88 | 515.87 | 513.57 | 513.06 | 104.64 | 104.81 | 109.83 | 1755.33 | −1429.67 | 173.25 |
| IPW | 302.65 | 638.80 | 638.05 | 637.00 | 305.49 | 303.68 | 303.62 | 1900.85 | −1270.70 | 320.41 |
| Pair Matching (bias corrected) | −1.46 | 610.43 | 613.04 | 463.74 | −400.41 | −397.05 | −397.35 | 1599.17 | −1577.66 | 15.82 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 71.59 | 343.59 | 342.04 | 337.37 | 70.16 | 72.50 | 69.81 | 3603.22 | 2511.61 | 556.46 |
| Ridge M. Epan | 61.16 | 293.04 | 291.05 | 290.18 | 59.08 | 59.91 | 60.06 | 3463.70 | 2415.09 | 436.35 |
| IPW | 127.18 | 438.55 | 437.76 | 436.00 | 129.61 | 129.60 | 128.99 | 3804.20 | 1777.43 | 290.78 |
| Pair Matching (bias corrected) | 33.22 | 400.58 | 402.54 | 240.62 | 236.94 | 233.46 | 224.63 | 2918.21 | 2843.96 | 389.60 |

Panel B: Severe contamination, two covariates (p = 2)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 6.77 | 365.94 | 367.44 | 363.69 | −52.71 | −53.61 | −52.90 | 715.82 | −704.41 | 9.01 |
| Ridge M. Epan | −1.26 | 361.99 | 362.35 | 361.20 | −39.69 | −38.78 | −1.32 | 707.99 | −710.25 | 0.59 |
| IPW | 15.14 | 366.01 | 365.67 | 365.88 | 11.74 | 12.45 | 12.60 | 724.58 | −694.08 | 16.83 |
| Pair Matching (bias corrected) | −2.53 | 357.08 | 359.43 | 358.67 | −178.27 | −180.27 | −179.55 | 706.65 | −713.44 | −0.06 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 8.93 | 141.38 | 142.58 | 139.63 | 16.61 | 16.57 | 15.16 | 532.34 | 517.66 | 21.31 |
| Ridge M. Epan | 6.56 | 137.26 | 137.50 | 136.73 | 11.45 | 11.32 | 6.99 | 513.77 | 517.40 | 12.91 |
| IPW | 7.49 | 139.29 | 139.04 | 139.22 | 7.77 | 7.71 | 7.69 | 537.06 | 494.29 | 12.61 |
| Pair Matching (bias corrected) | 8.98 | 134.52 | 136.12 | 135.97 | 53.05 | 53.97 | 48.70 | 519.86 | 530.76 | 21.61 |

Panel C: Mild contamination, ten covariates (p = 10)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 157.33 | 463.05 | 462.51 | 465.78 | 106.34 | 109.70 | 105.85 | 639.93 | −317.41 | 163.81 |
| Ridge M. Epan | 147.88 | 421.79 | 420.59 | 423.28 | 105.66 | 106.37 | 109.36 | 630.12 | −325.38 | 155.49 |
| IPW | 302.65 | 557.12 | 555.97 | 558.83 | 294.72 | 293.01 | 293.63 | 782.11 | −169.35 | 307.98 |
| Pair Matching (bias corrected) | −1.46 | 308.67 | 305.55 | 352.88 | −120.19 | −119.79 | −118.44 | 478.73 | −474.32 | 3.72 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 71.59 | 247.91 | 247.32 | 249.98 | 70.09 | 72.02 | 68.68 | 492.73 | 188.26 | 116.46 |
| Ridge M. Epan | 61.16 | 203.29 | 202.74 | 204.07 | 58.90 | 59.87 | 59.55 | 465.08 | 175.98 | 96.51 |
| IPW | 127.18 | 339.96 | 338.72 | 342.15 | 123.97 | 124.03 | 123.85 | 659.32 | 75.26 | 143.62 |
| Pair Matching (bias corrected) | 33.22 | 119.82 | 118.23 | 149.90 | 58.48 | 58.29 | 56.14 | 289.87 | 288.91 | 64.86 |

Panel D: Mild contamination, two covariates (p = 2)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 6.77 | 166.49 | 165.31 | 176.44 | −43.12 | −43.16 | −42.81 | 219.49 | −206.58 | 7.44 |
| Ridge M. Epan | −1.26 | 155.43 | 153.36 | 160.93 | −42.85 | −42.41 | −39.68 | 211.52 | −213.96 | −0.70 |
| IPW | 15.14 | 166.26 | 164.94 | 167.79 | −17.68 | −16.96 | −15.55 | 227.97 | −197.63 | 15.64 |
| Pair Matching (bias corrected) | −2.53 | 153.08 | 151.58 | 173.31 | −61.94 | −62.04 | −60.94 | 210.22 | −215.81 | −1.79 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 8.93 | 35.45 | 35.23 | 39.21 | 12.64 | 12.98 | 12.27 | 58.07 | 52.79 | 10.18 |
| Ridge M. Epan | 6.56 | 30.34 | 29.67 | 32.10 | 9.13 | 9.16 | 8.72 | 51.85 | 52.98 | 7.20 |
| IPW | 7.49 | 33.43 | 33.06 | 34.04 | 8.57 | 8.42 | 8.35 | 59.68 | 46.86 | 7.94 |
| Pair Matching (bias corrected) | 8.98 | 31.11 | 30.81 | 38.08 | 14.93 | 15.24 | 14.31 | 54.25 | 56.78 | 10.23 |
Note: The results use the propensity score metric and 2000 replications. The statistics presented are the bias and MSE of each estimator (Pair matching, Ridge, IPW and bias corrected Pair matching) scaled by 1000. The bias is calculated by subtracting the true effect of the treatment (1) from the estimate of TOT. Each column represents a contamination type and placement: Clean; BLP; GLP; and vertical outliers, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
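The bias and MSE statistics described in the note can be reproduced from the vector of Monte Carlo estimates. A minimal sketch (the helper name is ours, not the authors' code), assuming one TOT estimate per replication and a true treatment effect of 1:

```python
import numpy as np

def bias_mse(tot_estimates, true_effect=1.0, scale=1000):
    """Bias and MSE of simulated TOT estimates, scaled by 1000 as in the tables."""
    est = np.asarray(tot_estimates, dtype=float)
    bias = est.mean() - true_effect          # bias = mean(TOT_hat) - true effect (1)
    mse = np.mean((est - true_effect) ** 2)  # MSE around the true effect
    return bias * scale, mse * scale
```

With `scale=1000` the output matches the ×1000 convention used in the panels.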
Table A3. Simulated bias and MSE of Average Treatment Effect on the Treated (TOT) estimates in the presence of outliers with different number of matching neighbors.
Panel A: Severe contamination, ten covariates (p = 10)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Nearest-Neighbor matching (M = 1) | 155.72 | 536.54 | 568.73 | 547.42 | 43.91 | 162.42 | 100.70 | 1738.27 | −1425.95 | 160.84 |
| Nearest-Neighbor matching (M = 5) | 241.55 | 595.71 | 623.69 | 607.70 | 156.59 | 252.63 | 203.82 | 1824.10 | −1333.56 | 245.78 |
| Nearest-Neighbor matching (bias corrected, M = 1) | −1.48 | 785.56 | 433.26 | 461.78 | −797.19 | −4.43 | −398.52 | 1581.07 | −1570.68 | 9.77 |
| Nearest-Neighbor matching (bias corrected, M = 5) | −3.95 | 787.35 | 433.81 | 465.12 | −807.07 | −3.41 | −406.21 | 1578.59 | −1557.95 | 8.98 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Nearest-Neighbor matching (M = 1) | 112.73 | 353.53 | 384.21 | 361.60 | 134.13 | 121.73 | 122.59 | 3113.71 | 3808.92 | 1005.98 |
| Nearest-Neighbor matching (M = 5) | 98.98 | 390.64 | 421.90 | 403.38 | 76.79 | 106.41 | 88.37 | 3370.40 | 2524.73 | 469.56 |
| Nearest-Neighbor matching (bias corrected, M = 1) | 61.73 | 669.61 | 234.13 | 258.86 | 857.37 | 65.18 | 283.12 | 2564.56 | 3774.03 | 735.33 |
| Nearest-Neighbor matching (bias corrected, M = 5) | 49.99 | 663.59 | 224.75 | 253.07 | 848.14 | 53.36 | 270.56 | 2544.56 | 3446.37 | 557.36 |

Panel B: Severe contamination, two covariates (p = 2)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Nearest-Neighbor matching (M = 1) | 9.13 | 367.76 | 367.39 | 364.36 | −112.22 | 10.93 | −50.59 | 717.27 | −705.47 | 12.48 |
| Nearest-Neighbor matching (M = 5) | 21.41 | 370.20 | 370.50 | 370.52 | −77.89 | 23.46 | −25.64 | 729.56 | −687.38 | 24.62 |
| Nearest-Neighbor matching (bias corrected, M = 1) | −0.05 | 354.12 | 363.51 | 359.18 | −353.72 | 1.14 | −176.80 | 708.10 | −714.09 | 3.67 |
| Nearest-Neighbor matching (bias corrected, M = 5) | 0.50 | 354.69 | 364.89 | 363.61 | −355.22 | 1.02 | −175.60 | 708.65 | −707.78 | 4.04 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Nearest-Neighbor matching (M = 1) | 13.17 | 145.73 | 145.98 | 143.36 | 42.97 | 14.28 | 22.53 | 528.13 | 557.91 | 36.83 |
| Nearest-Neighbor matching (M = 5) | 9.42 | 143.94 | 144.39 | 144.41 | 20.01 | 10.32 | 12.02 | 541.71 | 504.42 | 21.16 |
| Nearest-Neighbor matching (bias corrected, M = 1) | 13.17 | 134.87 | 143.09 | 139.57 | 184.52 | 14.30 | 59.37 | 515.14 | 571.01 | 37.00 |
| Nearest-Neighbor matching (bias corrected, M = 5) | 9.24 | 132.05 | 140.45 | 139.38 | 154.80 | 10.11 | 47.13 | 511.91 | 535.13 | 21.87 |

Panel C: Mild contamination, ten covariates (p = 10)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Nearest-Neighbor matching (M = 1) | 155.72 | 446.02 | 478.72 | 465.70 | 49.10 | 163.02 | 104.01 | 630.48 | −318.78 | 157.26 |
| Nearest-Neighbor matching (M = 5) | 241.55 | 501.66 | 529.56 | 519.54 | 158.63 | 249.52 | 203.33 | 716.31 | −230.98 | 242.82 |
| Nearest-Neighbor matching (bias corrected, M = 1) | −1.48 | 236.68 | 376.49 | 350.68 | −240.05 | −2.05 | −118.73 | 473.28 | −472.24 | 1.89 |
| Nearest-Neighbor matching (bias corrected, M = 5) | −3.95 | 232.97 | 374.53 | 348.82 | −246.06 | −4.13 | −124.34 | 470.81 | −470.15 | −0.07 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Nearest-Neighbor matching (M = 1) | 112.73 | 262.10 | 289.80 | 278.70 | 130.69 | 120.45 | 119.56 | 486.58 | 342.40 | 193.33 |
| Nearest-Neighbor matching (M = 5) | 98.98 | 283.71 | 312.46 | 302.20 | 76.94 | 104.75 | 87.36 | 553.96 | 158.56 | 132.95 |
| Nearest-Neighbor matching (bias corrected, M = 1) | 61.73 | 100.48 | 187.36 | 170.06 | 151.11 | 66.23 | 90.72 | 286.13 | 398.70 | 121.23 |
| Nearest-Neighbor matching (bias corrected, M = 5) | 49.99 | 89.78 | 176.80 | 159.20 | 141.52 | 53.69 | 79.69 | 271.91 | 361.58 | 95.43 |

Panel D: Mild contamination, two covariates (p = 2)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Nearest-Neighbor matching (M = 1) | 9.13 | 127.38 | 204.31 | 175.95 | −79.71 | −3.21 | −40.49 | 221.57 | −205.25 | 10.13 |
| Nearest-Neighbor matching (M = 5) | 21.41 | 138.25 | 206.59 | 180.88 | −57.49 | 7.10 | −24.41 | 233.86 | −191.22 | 22.38 |
| Nearest-Neighbor matching (bias corrected, M = 1) | −0.05 | 105.04 | 199.85 | 172.82 | −106.54 | −13.39 | −58.22 | 212.40 | −214.26 | 1.07 |
| Nearest-Neighbor matching (bias corrected, M = 5) | 0.50 | 105.91 | 198.37 | 172.22 | −106.76 | −15.38 | −59.53 | 212.95 | −211.98 | 1.57 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Nearest-Neighbor matching (M = 1) | 13.17 | 27.84 | 52.39 | 43.02 | 25.87 | 13.98 | 17.67 | 62.24 | 59.63 | 15.40 |
| Nearest-Neighbor matching (M = 5) | 9.42 | 26.96 | 49.90 | 40.25 | 15.54 | 9.87 | 11.38 | 63.69 | 47.69 | 10.51 |
| Nearest-Neighbor matching (bias corrected, M = 1) | 13.17 | 22.53 | 50.49 | 41.89 | 31.93 | 14.24 | 19.74 | 58.35 | 63.53 | 15.40 |
| Nearest-Neighbor matching (bias corrected, M = 5) | 9.24 | 19.18 | 46.53 | 37.23 | 24.93 | 10.41 | 15.00 | 54.63 | 56.53 | 10.39 |
Note: The results use the propensity score metric and 2000 replications. The statistics presented are the bias and MSE of nearest neighbor and bias corrected nearest neighbor matching using 1 and 5 neighbors, scaled by 1000. The bias is calculated by subtracting the true effect of the treatment (1) from the estimate of TOT. Each column represents a contamination type and placement: Clean; BLP; GLP; and vertical outliers, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
Table A4. Simulated bias and MSE of the treatment effect estimates (TOT) using propensity score and reweighted after the SD method.
Panel A: Severe contamination, ten covariates (p = 10)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 153.23 | 161.46 | 166.61 | 164.65 | 159.53 | 166.83 | 164.22 | 161.13 | 166.63 | 165.69 |
| Ridge M. Epan | 143.85 | 151.25 | 161.14 | 155.85 | 150.92 | 160.83 | 155.94 | 151.11 | 161.07 | 155.66 |
| IPW | 299.39 | 310.16 | 310.98 | 310.18 | 310.06 | 311.04 | 310.27 | 310.22 | 311.04 | 310.06 |
| Pair Matching (bias corrected) | −3.42 | −1.91 | −0.18 | 2.80 | −3.54 | 0.80 | 2.34 | −2.49 | −0.39 | 3.92 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 110.00 | 114.79 | 122.35 | 120.16 | 114.20 | 122.54 | 119.51 | 114.38 | 123.05 | 120.32 |
| Ridge M. Epan | 93.27 | 97.21 | 106.22 | 101.66 | 97.08 | 106.01 | 101.65 | 97.18 | 106.12 | 101.62 |
| IPW | 176.78 | 183.69 | 192.95 | 188.56 | 183.54 | 192.92 | 188.52 | 183.68 | 192.88 | 188.53 |
| Pair Matching (bias corrected) | 59.26 | 61.07 | 66.69 | 63.52 | 61.14 | 66.75 | 63.75 | 60.61 | 67.25 | 63.39 |

Panel B: Severe contamination, two covariates (p = 2)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 0.64 | 13.14 | 12.80 | 13.64 | 12.67 | 13.41 | 14.03 | 14.01 | −4.53 | 6.56 |
| Ridge M. Epan | −5.81 | 5.85 | 3.79 | 4.62 | 5.80 | 3.53 | 4.50 | 6.97 | −14.32 | −2.43 |
| IPW | 5.00 | 17.44 | 17.15 | 17.44 | 17.46 | 17.20 | 17.25 | 18.55 | 0.04 | 9.96 |
| Pair Matching (bias corrected) | −8.05 | 4.19 | 2.93 | 4.24 | 3.76 | 3.28 | 4.43 | 5.10 | −14.81 | −3.20 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 13.48 | 14.59 | 14.33 | 14.32 | 14.53 | 14.23 | 14.27 | 14.56 | 15.21 | 14.62 |
| Ridge M. Epan | 8.69 | 9.16 | 9.68 | 9.32 | 9.15 | 9.66 | 9.30 | 9.26 | 10.26 | 9.41 |
| IPW | 10.51 | 11.13 | 12.02 | 11.56 | 11.14 | 11.99 | 11.54 | 11.24 | 11.93 | 11.38 |
| Pair Matching (bias corrected) | 13.50 | 14.43 | 14.21 | 14.13 | 14.41 | 14.05 | 14.06 | 14.40 | 15.47 | 14.56 |

Panel C: Mild contamination, ten covariates (p = 10)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 153.35 | 372.82 | 448.04 | 404.06 | 81.00 | 160.58 | 119.21 | 572.72 | −279.06 | 125.27 |
| Ridge M. Epan | 142.75 | 341.86 | 407.26 | 368.73 | 86.30 | 155.08 | 125.86 | 560.98 | −288.90 | 115.28 |
| IPW | 298.74 | 512.04 | 505.90 | 497.75 | 296.01 | 300.50 | 300.04 | 716.64 | −132.98 | 273.76 |
| Pair Matching (bias corrected) | −6.88 | 159.64 | 332.41 | 286.55 | −161.82 | −2.89 | −74.90 | 412.20 | −443.64 | −33.62 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 112.55 | 214.29 | 267.55 | 234.91 | 125.43 | 120.77 | 117.19 | 421.68 | 296.76 | 184.73 |
| Ridge M. Epan | 93.86 | 174.62 | 216.32 | 191.14 | 102.58 | 103.93 | 103.34 | 392.40 | 258.90 | 146.59 |
| IPW | 178.51 | 328.55 | 327.55 | 319.77 | 177.27 | 185.24 | 181.27 | 604.96 | 157.50 | 198.96 |
| Pair Matching (bias corrected) | 59.45 | 76.73 | 160.90 | 132.77 | 109.02 | 65.43 | 75.90 | 234.83 | 354.43 | 117.85 |

Panel D: Mild contamination, two covariates (p = 2)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | −0.21 | 105.36 | 173.82 | 145.27 | −73.17 | −9.80 | −39.24 | 198.55 | −202.15 | −6.85 |
| Ridge M. Epan | −7.54 | 103.37 | 159.17 | 135.36 | −66.00 | −19.15 | −39.54 | 192.80 | −211.98 | −13.43 |
| IPW | 5.90 | 141.82 | 137.88 | 138.21 | −20.45 | −20.49 | −18.14 | 205.01 | −199.09 | −0.81 |
| Pair Matching (bias corrected) | −8.79 | 86.96 | 170.39 | 142.58 | −94.95 | −19.95 | −54.20 | 189.74 | −210.86 | −15.76 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 13.79 | 22.60 | 41.57 | 32.63 | 23.57 | 14.18 | 16.99 | 53.79 | 57.78 | 16.10 |
| Ridge M. Epan | 8.82 | 18.63 | 33.51 | 26.57 | 14.72 | 9.66 | 11.09 | 46.28 | 55.22 | 9.81 |
| IPW | 10.74 | 28.30 | 27.67 | 27.78 | 12.16 | 13.04 | 12.47 | 53.14 | 51.33 | 11.32 |
| Pair Matching (bias corrected) | 13.96 | 19.01 | 40.39 | 31.84 | 27.59 | 14.58 | 18.39 | 50.41 | 61.52 | 16.34 |
Note: The results use the propensity score metric and 2000 replications. The statistics presented are the bias and MSE of each estimator (Pair matching, Ridge, IPW and bias corrected Pair matching) scaled by 1000 after reweighting using the SD method. The bias is calculated by subtracting the true effect of the treatment (1) from the estimate of TOT. Each column represents a contamination type and placement: Clean; BLP; GLP; and vertical outliers, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
Table A5. Simulated bias and MSE of the reweighted treatment effect estimates based on a naive approach.
Panel A: Severe contamination, ten covariates (p = 10)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 152.34 | 149.75 | 156.83 | 151.28 | 149.75 | 156.19 | 151.28 | 163.71 | 153.86 | 142.29 |
| Ridge M. Epan | 138.48 | 137.02 | 147.56 | 142.11 | 137.02 | 146.91 | 142.11 | 150.40 | 143.65 | 132.93 |
| IPW | 287.92 | 288.30 | 291.06 | 287.58 | 288.29 | 290.29 | 287.58 | 304.14 | 285.32 | 276.17 |
| Pair Matching (bias corrected) | 0.72 | −0.55 | −2.12 | −2.97 | −0.56 | −2.40 | −2.97 | 13.34 | −1.44 | −0.13 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 110.27 | 109.94 | 119.95 | 114.58 | 109.94 | 118.23 | 114.58 | 114.02 | 116.67 | 113.05 |
| Ridge M. Epan | 90.46 | 89.46 | 99.95 | 94.09 | 89.46 | 98.28 | 94.09 | 93.36 | 96.94 | 91.24 |
| IPW | 158.59 | 160.06 | 170.16 | 163.64 | 160.06 | 167.97 | 163.64 | 168.70 | 164.99 | 156.69 |
| Pair Matching (bias corrected) | 60.60 | 60.63 | 65.44 | 62.81 | 60.64 | 65.28 | 62.81 | 60.95 | 64.68 | 63.21 |

Panel B: Severe contamination, two covariates (p = 2)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 5.84 | 60.79 | 115.17 | 75.98 | −7.27 | 9.59 | 5.22 | 168.07 | −182.60 | −85.43 |
| Ridge M. Epan | −0.35 | 38.97 | 72.88 | 37.08 | 0.82 | 1.23 | 1.17 | 168.72 | −200.15 | −97.69 |
| IPW | 15.25 | 113.44 | 380.97 | 214.43 | 8.32 | 16.07 | 13.91 | 175.00 | −175.32 | −77.65 |
| Pair Matching (bias corrected) | −2.05 | 48.24 | 104.91 | 68.79 | −28.77 | 0.39 | −7.03 | 160.14 | −189.36 | −92.39 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 12.82 | 17.45 | 30.02 | 21.03 | 14.12 | 13.93 | 13.94 | 42.33 | 52.56 | 26.01 |
| Ridge M. Epan | 8.62 | 10.68 | 16.89 | 11.03 | 8.84 | 9.20 | 9.06 | 38.30 | 52.19 | 21.24 |
| IPW | 10.71 | 22.08 | 189.59 | 84.18 | 11.10 | 12.07 | 11.63 | 41.91 | 43.09 | 18.24 |
| Pair Matching (bias corrected) | 12.92 | 15.87 | 26.01 | 19.06 | 15.54 | 13.95 | 14.21 | 39.89 | 55.21 | 27.41 |

Panel C: Mild contamination, ten covariates (p = 10)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 152.34 | 307.84 | 546.35 | 447.56 | 94.94 | 155.58 | 130.00 | 545.29 | −286.20 | 98.49 |
| Ridge M. Epan | 138.48 | 278.68 | 502.11 | 411.40 | 96.21 | 144.93 | 125.60 | 533.17 | −293.85 | 88.63 |
| IPW | 287.92 | 473.48 | 518.67 | 491.40 | 271.91 | 284.70 | 278.17 | 671.22 | −142.53 | 233.00 |
| Pair Matching (bias corrected) | 0.72 | 113.34 | 351.41 | 289.74 | −96.44 | −0.35 | −39.81 | 401.63 | −430.23 | −44.21 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 110.27 | 168.85 | 367.38 | 281.25 | 110.64 | 117.79 | 113.33 | 381.69 | 293.79 | 160.86 |
| Ridge M. Epan | 90.46 | 133.96 | 305.60 | 231.58 | 90.70 | 97.72 | 93.70 | 352.70 | 250.72 | 128.20 |
| IPW | 158.59 | 281.51 | 361.13 | 324.48 | 151.63 | 165.62 | 158.94 | 527.29 | 120.48 | 147.94 |
| Pair Matching (bias corrected) | 60.60 | 63.81 | 171.62 | 138.51 | 78.57 | 64.94 | 67.13 | 220.79 | 343.86 | 115.39 |

Panel D: Mild contamination, two covariates (p = 2)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C | Vertical: in T | Vertical: in C | Vertical: in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| BIAS (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 5.84 | 119.67 | 202.83 | 169.20 | −73.65 | −6.12 | −37.15 | 196.41 | −204.66 | −7.11 |
| Ridge M. Epan | −0.35 | 116.59 | 181.51 | 153.11 | −63.08 | −15.09 | −36.10 | 195.20 | −210.36 | −10.70 |
| IPW | 15.25 | 156.44 | 170.36 | 165.59 | −24.20 | −13.84 | −17.50 | 204.99 | −194.20 | 1.85 |
| Pair Matching (bias corrected) | −2.05 | 100.85 | 197.21 | 165.58 | −92.41 | −15.47 | −50.13 | 188.88 | −212.26 | −14.77 |
| MSE (×1000) |  |  |  |  |  |  |  |  |  |  |
| Pair Matching | 12.82 | 26.27 | 51.74 | 39.84 | 23.27 | 14.22 | 16.69 | 51.73 | 58.05 | 14.26 |
| Ridge M. Epan | 8.62 | 21.25 | 41.09 | 31.18 | 14.10 | 9.54 | 10.76 | 46.90 | 54.63 | 9.57 |
| IPW | 10.71 | 31.83 | 37.04 | 35.31 | 12.93 | 12.75 | 12.86 | 52.66 | 49.25 | 11.09 |
| Pair Matching (bias corrected) | 12.92 | 22.08 | 49.40 | 38.58 | 26.81 | 14.38 | 17.98 | 48.93 | 61.36 | 14.60 |
Note: The results use the propensity score metric and 2000 replications. The statistics presented are the bias and MSE of each estimator (Pair matching, Ridge, IPW and bias corrected Pair matching) scaled by 1000 after reweighting using the naive approach. The bias is calculated by subtracting the true effect of the treatment (1) from the estimate of TOT. Each column represents a contamination type and placement: Clean; BLP; GLP; and vertical outliers, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.

References

  1. Abadie, Alberto, and Guido W. Imbens. 2006. Large sample properties of matching estimators for average treatment effects. Econometrica 74: 235–67. [Google Scholar] [CrossRef]
  2. Abadie, Alberto, and Guido W. Imbens. 2011. Bias-corrected matching estimators for average treatment effects. Journal of Business & Economic Statistics 29: 1–11. [Google Scholar]
  3. Angiulli, Fabrizio, and Clara Pizzuti. 2002. Fast outlier detection in high dimensional spaces. In European Conference on Principles of Data Mining and Knowledge Discovery. Berlin/Heidelberg: Springer, pp. 15–27. [Google Scholar]
  4. Ashenfelter, Orley. 1978. Estimating the effect of training programs on earnings. The Review of Economics and Statistics 60: 47–57. [Google Scholar] [CrossRef]
  5. Ashenfelter, Orley, and David Card. 1985. Using the Longitudinal Structure of Earnings to Estimate the Effect of Training Programs. The Review of Economics and Statistics 67: 648–60. [Google Scholar] [CrossRef]
  6. Bassi, Laurie J. 1983. The Effect of CETA on the Postprogram Earnings of Participants. The Journal of Human Resources 18: 539–56. [Google Scholar] [CrossRef]
  7. Bassi, Laurie J. 1984. Estimating the effect of training programs with non-random selection. The Review of Economics and Statistics 66: 36–43. [Google Scholar] [CrossRef]
  8. Breunig, Markus M., Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander. 2000. LOF: Identifying density-based local outliers. In ACM Sigmod Record. New York: ACM, vol. 29, pp. 93–104. [Google Scholar]
  9. Busso, Matias, John DiNardo, and Justin McCrary. 2009. Finite sample properties of semiparametric estimators of average treatment effects. Journal of Business and Economic Statistics. Forthcoming. [Google Scholar]
  10. Busso, Matias, John DiNardo, and Justin McCrary. 2014. New evidence on the finite sample properties of propensity score reweighting and matching estimators. The Review of Economics and Statistics 96: 885–97. [Google Scholar] [CrossRef]
  11. Canavire-Bacarreza, Gustavo, and Merlin M. Hanauer. 2013. Estimating the impacts of bolivia’s protected areas on poverty. World Development 41: 265–85. [Google Scholar] [CrossRef] [Green Version]
  12. Croux, Christophe, and Gentiane Haesbroeck. 2003. Implementing the Bianco and Yohai estimator for logistic regression. Computational Statistics & Data Analysis 44: 273–95. [Google Scholar]
  13. Croux, Christophe, Cécile Flandre, and Gentiane Haesbroeck. 2002. The breakdown behavior of the maximum likelihood estimator in the logistic regression model. Statistics & Probability Letters 60: 377–86. [Google Scholar]
  14. De Vries, Timothy, Sanjay Chawla, and Michael E. Houle. 2010. Finding local anomalies in very high dimensional space. Paper presented at the 2010 IEEE International Conference on Data Mining, Sydney, NSW, Australia, December 13–17; pp. 128–37. [Google Scholar]
  15. Dehejia, Rajeev. 2005. Practical propensity score matching: A reply to Smith and Todd. Journal of Econometrics 125: 355–64. [Google Scholar] [CrossRef]
  16. Dehejia, Rajeev H., and Sadek Wahba. 1999. Causal effects in nonexperimental studies: Reevaluating the evaluation of training programs. Journal of the American Statistical Association 94: 1053–62. [Google Scholar] [CrossRef]
  17. Dehejia, Rajeev H., and Sadek Wahba. 2002. Propensity score-matching methods for nonexperimental causal studies. The Review of Economics and Statistics 84: 151–61. [Google Scholar] [CrossRef] [Green Version]
  18. Donoho, David L. 1982. Breakdown Properties of Multivariate Location Estimators. Technical report. Boston: Harvard University. [Google Scholar]
  19. Frölich, Markus. 2004. Finite-sample properties of propensity-score matching and weighting estimators. The Review of Economics and Statistics 86: 77–90. [Google Scholar] [CrossRef]
  20. Hadi, Ali S., AHM Rahmatullah Imon, and Mark Werner. 2009. Detection of outliers. Wiley Interdisciplinary Reviews: Computational Statistics 1: 57–70. [Google Scholar] [CrossRef]
  21. Hausman, Jerry A., and David A. Wise. 1985. Social Experimentation. Chicago: University of Chicago Press for National Bureau of Economic Research. [Google Scholar]
  22. Heckman, James J., and Edward Vytlacil. 2005. Structural equations, treatment effects, and econometric policy evaluation. Econometrica 73: 669–738. [Google Scholar] [CrossRef] [Green Version]
  23. Heckman, James J., Hidehiko Ichimura, and Petra E. Todd. 1997a. Matching as an econometric evaluation estimator: Evidence from evaluating a job training programme. The Review of Economic Studies 64: 605–54. [Google Scholar] [CrossRef]
  24. Heckman, James J., Jeffrey Smith, and Nancy Clements. 1997b. Making the most out of programme evaluations and social experiments: Accounting for heterogeneity in programme impacts. The Review of Economic Studies 64: 487–535. [Google Scholar] [CrossRef]
  25. Hirano, Keisuke, Guido W. Imbens, and Geert Ridder. 2003. Efficient estimation of average treatment effects using the estimated propensity score. Econometrica 71: 1161–89. [Google Scholar] [CrossRef] [Green Version]
  26. Imbens, Guido W. 2004. Nonparametric estimation of average treatment effects under exogeneity: A review. The Review of Economics and Statistics 86: 4–29. [Google Scholar] [CrossRef]
  27. Jarrell, Michele Glanker. 1994. A comparison of two procedures, the Mahalanobis distance and the Andrews-Pregibon statistic, for identifying multivariate outliers. Research in the Schools 1: 49–58. [Google Scholar]
  28. Keller, Fabian, Emmanuel Muller, and Klemens Bohm. 2012. HiCS: High contrast subspaces for density-based outlier ranking. Paper presented at the 2012 IEEE 28th International Conference on Data Engineering, Arlington, VA, USA, April 1–5; pp. 1037–48. [Google Scholar]
  29. Khan, Shakeeb, and Elie Tamer. 2010. Irregular identification, support conditions, and inverse weight estimation. Econometrica 78: 2021–42. [Google Scholar]
  30. Khandker, Shahidur R., Gayatri B. Koolwal, and Hussain A. Samad. 2009. Handbook on Impact Evaluation: Quantitative Methods and Practices. Washington, DC: World Bank Publications. [Google Scholar]
  31. King, Gary, Christopher Lucas, and Richard A. Nielsen. 2017. The Balance-Sample Size Frontier in Matching Methods for Causal Inference. American Journal of Political Science 61: 473–89. [Google Scholar] [CrossRef] [Green Version]
  32. Knorr, Edwin M., Raymond T. Ng, and Vladimir Tucakov. 2000. Distance-based outliers: Algorithms and applications. The VLDB Journal—The International Journal on Very Large Data Bases 8: 237–53. [Google Scholar] [CrossRef]
  33. LaLonde, Robert J. 1986. Evaluating the econometric evaluations of training programs with experimental data. The American Economic Review 76: 604–20. [Google Scholar]
  34. Maronna, Ricardo A., R. Douglas Martin, and Victor Yohai. 2006. Robust Statistics. Chichester: John Wiley & Sons. [Google Scholar]
  35. Orair, Gustavo H., Carlos H. C. Teixeira, Wagner Meira Jr., Ye Wang, and Srinivasan Parthasarathy. 2010. Distance-based outlier detection: Consolidation and renewed bearing. Proceedings of the VLDB Endowment 3: 1469–80. [Google Scholar] [CrossRef]
  36. Osborne, Jason W., and Amy Overbay. 2004. The power of outliers (and why researchers should always check for them). Practical Assessment, Research & Evaluation 9: 1–12. [Google Scholar]
  37. Rasmussen, Jeffrey Lee. 1988. Evaluating outlier identification tests: Mahalanobis D squared and Comrey Dk. Multivariate Behavioral Research 23: 189–202. [Google Scholar] [CrossRef] [PubMed]
  38. Rousseeuw, Peter J., and Annick M. Leroy. 2005. Robust Regression and Outlier Detection. Hoboken: John Wiley & Sons, vol. 589. [Google Scholar]
  39. Rousseeuw, Peter J., and Bert C. Van Zomeren. 1990. Unmasking multivariate outliers and leverage points. Journal of the American Statistical Association 85: 633–39. [Google Scholar] [CrossRef]
  40. Rubin, Donald B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66: 688. [Google Scholar] [CrossRef] [Green Version]
  41. Schwager, Steven J., and Barry H. Margolin. 1982. Detection of multivariate normal outliers. The Annals of Statistics 10: 943–54. [Google Scholar] [CrossRef]
  42. Seifert, Burkhardt, and Theo Gasser. 2000. Data adaptive ridging in local polynomial regression. Journal of Computational and Graphical Statistics 9: 338–60. [Google Scholar]
  43. Smith, Jeffrey A., and Petra E. Todd. 2005. Does matching overcome LaLonde’s critique of nonexperimental estimators? Journal of Econometrics 125: 305–53. [Google Scholar] [CrossRef] [Green Version]
  44. Stahel, Werner A. 1981. Robuste Schätzungen: Infinitesimale optimalität und Schätzungen von Kovarianzmatrizen. Zürich: Eidgenössische Technische Hochschule [ETH]. [Google Scholar]
  45. Stevens, James P. 1984. Outliers and influential data points in regression analysis. Psychological Bulletin 95: 334. [Google Scholar] [CrossRef]
  46. Stuart, Elizabeth A., Brian K. Lee, and Finbarr P. Leacy. 2013. Prognostic score–based balance measures can be a useful diagnostic for propensity score methods in comparative effectiveness research. Journal of Clinical Epidemiology 66: S84–S90.e1. [Google Scholar] [CrossRef] [PubMed]
  47. Verardi, Vincenzo, and Alice McCathie. 2012. The S-Estimator of Multivariate Location and Scatter in Stata. The Stata Journal: Promoting Communications on Statistics and Stata 12: 299–307. [Google Scholar] [CrossRef] [Green Version]
  48. Verardi, Vincenzo, and Christophe Croux. 2009. Robust Regression in Stata. The Stata Journal 9: 439–53. [Google Scholar] [CrossRef] [Green Version]
  49. Verardi, Vincenzo, Marjorie Gassner, and Darwin Ugarte. 2012. Robustness for Dummies. ECARES Working Papers. Brussels: ECARES. [Google Scholar]
  50. Zimmerman, Donald W. 1994. A note on the influence of outliers on parametric and nonparametric tests. The Journal of General Psychology 121: 391–401. [Google Scholar] [CrossRef]
1.
For a complete discussion and examples of the relationship between randomized and non-randomized experiments and their bias, see LaLonde (1986); Heckman et al. (1997a); Heckman et al. (1997b).
2.
We follow Jarrell (1994); Rasmussen (1988); Stevens (1984) in defining outliers as those few observations that behave atypically from the bulk of the data and are therefore much larger or smaller values than those of the remaining observations in the sample.
3.
In the regression analysis framework, this type of outlier affects the regression coefficients.
4.
Recall that the Mahalanobis distance $MD_i = \sqrt{(X_i - \theta)' \Sigma^{-1} (X_i - \theta)}$ is calculated using the squared distances to the centroid; that is,
$$(\hat{\theta}, \hat{\Sigma}) = \arg\min_{\theta, \Sigma} \det(\Sigma), \quad \text{such that} \quad \frac{1}{n} \sum_{i=1}^{n} \left( \sqrt{(X_i - \theta)' \Sigma^{-1} (X_i - \theta)} \right)^{2} = p.$$
In the S-estimator this constraint is replaced with
$$(\hat{\theta}, \hat{\Sigma}) = \arg\min_{\theta, \Sigma} \det(\Sigma), \quad \text{such that} \quad \frac{1}{n} \sum_{i=1}^{n} \rho\left( \sqrt{(X_i - \theta)' \Sigma^{-1} (X_i - \theta)} \right) = b.$$
The location ($\hat{\theta}$) and dispersion ($\hat{\Sigma}$) parameters are estimated simultaneously, using the Tukey biweight function as the $\rho$ function. See Verardi and McCathie (2012) for more details.
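The classical constraint in this footnote can be checked numerically. A minimal sketch (illustrative, not the authors' implementation): with the non-robust sample mean and maximum-likelihood covariance, the squared Mahalanobis distances average exactly to p by construction, which is why a single gross outlier can inflate both estimates and mask itself.

```python
import numpy as np

def mahalanobis_distances(X):
    """Classical Mahalanobis distances d_i = sqrt((X_i - theta)' Sigma^{-1} (X_i - theta)),
    with theta and Sigma taken as the sample mean and ML covariance (both non-robust)."""
    theta = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False, bias=True)  # bias=True -> divide by n, so mean(d^2) = p
    diff = X - theta
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
    return np.sqrt(d2)
```

The S-estimator bounds each term through the ρ function instead, so extreme points cannot dominate the estimated centroid and scatter.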
5.
For specific details about this method, see Verardi et al. (2012) and Maronna et al. (2006). A Stata code to implement this tool is available upon request.
6.
Since dummies are partialled out in these strategies, they do not represent an issue for the estimators.
7.
Verardi et al. (2012) also propose a weighted regression, using the weighting function $w(\delta) = \min\left\{1, \frac{\chi^{2}_{q,e}}{\delta}\right\}$ where $q = p + 2$, in order to preserve the original sample.
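One way to read this weighting scheme: observations whose squared robust distance δ stays below a χ² reference value (with q = p + 2 degrees of freedom) keep full weight, while more distant ones are shrunk toward zero rather than dropped. A hedged sketch with the cutoff passed in precomputed (the function name and exact functional form are illustrative, not the authors' definition):

```python
def outlier_weight(delta, cutoff):
    """Capped downweighting: weight 1 below the chi-square reference cutoff,
    cutoff/delta beyond it, so distant points are attenuated, not deleted."""
    if delta <= 0:
        return 1.0
    return min(1.0, cutoff / delta)
```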
8.
For a brief explanation of matching techniques see Canavire-Bacarreza and Hanauer (2013).
9.
When there is more than one covariate, β is a vector of coefficients equal to 0.5 for each covariate.
10.
Since $\alpha = 0$ and $X_i \sim N(0, 1)$, $T^*$ is also normally distributed with mean 0; thus, on average, half the population receives the treatment.
11.
All the overlap plots are based on density estimation.
12.
Results for the remaining designs are available from the corresponding author upon request.
13.
In this case the correlation for the clean data should be equal to 1.
14.
Results for the remaining designs are available from the corresponding author upon request.
15.
Note that although we searched for the single closest match, the illustration discussed above holds for different matching methods, as shown below.
16.
Recall that the true effect of the treatment is equal to one.
17.
Since the Pair Matching estimator in Table 4 uses one nearest neighbor for the matching process, we tested two cases, M = 1 and M = 5, for nearest-neighbor matching and bias-adjusted nearest-neighbor matching. The results are depicted in Table A3 in Appendix A and show that, in general, increasing the number of matching neighbors slightly increases the bias of the estimator and decreases its variance (whether bias corrected or not).
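The M = 1 versus M = 5 comparison can be sketched as a one-to-M nearest-neighbor matching estimator of the TOT on the propensity score (a minimal illustration, not the simulation code; ties are broken by sort order and matching is with replacement):

```python
import numpy as np

def tot_nn_matching(y, t, pscore, M=1):
    """TOT via nearest-neighbor matching on the propensity score: each treated
    unit is compared with the mean outcome of its M closest control units."""
    treated = np.flatnonzero(t == 1)
    controls = np.flatnonzero(t == 0)
    gaps = []
    for i in treated:
        order = np.argsort(np.abs(pscore[controls] - pscore[i]))
        matched = controls[order[:M]]
        gaps.append(y[i] - y[matched].mean())
    return float(np.mean(gaps))
```

With a larger M, each counterfactual averages over controls that are farther away in the propensity score, which is the bias–variance trade-off reported in Table A3.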
18.
Results for the Stahel (1981) and Donoho (1982) method as mentioned in Section 3 are very similar and are presented in Table A4 in Appendix A.
19.
Similar results were obtained when applying the SD tool to other estimators (see Verardi et al. (2012)).
20.
Similar results are obtained when using the Smultiv/SD algorithms to check for outliers when estimating the Average Treatment Effect (ATE) and also when changing the number of neighbors in nearest neighbor matching.
21.
DW applied propensity score matching estimators to subsamples of the same experimental data from the National Supported Work (NSW) Demonstration and the same non-experimental data from the Current Population Survey (CPS) and the Panel Study of Income Dynamics (PSID) analyzed by LaLonde (1986). ST re-estimated DW’s model using three samples: LaLonde’s full sample, DW’s sub-sample, and a third sub-sample (ST-sample). See Dehejia and Wahba (1999, 2002), and Smith and Todd (2005) for details.
22.
We would like to thank professor Smith for kindly sharing his data with us.
23.
The experimental treatment effect is the treatment effect from the randomized experiment in LaLonde (1986).
24.
The specifications for the PSID comparison group are age, age squared, schooling, schooling squared, no high school degree, married, black, Hispanic, real earnings in 1974, real earnings in 1974 squared, real earnings in 1975, real earnings in 1975 squared, dummy zero earning in 1974, dummy zero earning in 1975, and Hispanic * dummy zero earning in 1974. The specifications for the CPS group are age, age squared, age cubed, schooling, schooling squared, no high school degree, married, black, Hispanic, real earnings in 1974, real earnings in 1975, dummy zero earning in 1974, dummy zero earning in 1975, and Hispanic * dummy zero earnings in 1974.
Figure 1. Classification of outliers. Source: Verardi and Croux (2009).
Figure 2. Overlap plots for the designs. Source: Authors’ calculations.
Figure 3. Effect of bad leverage points on the propensity score. Source: Authors’ calculations.
Figure 4. Effect of good leverage points on the propensity score. Source: Authors’ calculations.
Figure 5. Effect of bad leverage points on the Mahalanobis distance. Source: Authors’ calculations.
Table 1. Effect of outliers on propensity score.
Panel A: Severe contamination, ten covariates (p = 10)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C |
|---|---|---|---|---|---|---|---|
| Coefficient (beta) | 0.51 | −0.05 | −0.04 | −0.04 | 0.51 | 0.51 | 0.51 |
| MSE (×1000) | 3.57 | 297.54 | 296.84 | 295.30 | 3.74 | 3.72 | 3.72 |
| Propensity Score |  |  |  |  |  |  |  |
| Mean | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| Growth in Variance |  | −11.0% | −11.0% | −11.1% | 5.0% | 5.0% | 4.8% |
| Growth in Kurtosis |  | 5.7% | 5.7% | 5.8% | −1.9% | −1.8% | −1.8% |
| Loss in Correlation |  | −7.9% | −7.8% | −7.8% | −2.4% | −2.4% | −2.3% |

Panel B: Severe contamination, two covariates (p = 2)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C |
|---|---|---|---|---|---|---|---|
| Coefficient (beta) | 0.50 | 0.00 | 0.00 | 0.00 | 0.51 | 0.51 | 0.51 |
| MSE (×1000) | 2.28 | 246.01 | 245.58 | 245.82 | 2.29 | 2.33 | 2.36 |
| Propensity Score |  |  |  |  |  |  |  |
| Mean | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| Growth in Variance |  | −50.9% | −50.9% | −50.9% | 19.3% | 19.3% | 18.4% |
| Growth in Kurtosis |  | 18.0% | 18.0% | 18.0% | 4.4% | 4.3% | 4.9% |
| Loss in Correlation |  | −29.7% | −29.6% | −29.6% | −8.3% | −8.3% | −8.0% |

Panel C: Mild contamination, ten covariates (p = 10)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C |
|---|---|---|---|---|---|---|---|
| Coefficient (beta) | 0.51 | 0.09 | 0.09 | 0.09 | 0.53 | 0.53 | 0.52 |
| MSE (×1000) | 3.57 | 170.62 | 169.52 | 172.47 | 3.68 | 3.67 | 3.63 |
| Propensity Score |  |  |  |  |  |  |  |
| Mean | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| Growth in Variance |  | −11.3% | −11.3% | −11.3% | 5.0% | 5.0% | 4.8% |
| Growth in Kurtosis |  | 6.1% | 6.1% | 6.1% | −2.1% | −2.1% | −2.0% |
| Loss in Correlation |  | −29.7% | −29.6% | −29.6% | −8.3% | −8.3% | −8.0% |

Panel D: Mild contamination, two covariates (p = 2)

| Estimator | Clean | BLP: in T | BLP: in C | BLP: in T and C | GLP: in T | GLP: in C | GLP: in T and C |
|---|---|---|---|---|---|---|---|
| Coefficient (beta) | 0.50 | 0.29 | 0.29 | 0.28 | 0.55 | 0.55 | 0.55 |
| MSE (×1000) | 2.28 | 46.35 | 46.33 | 47.86 | 4.32 | 4.40 | 4.19 |
| Propensity Score |  |  |  |  |  |  |  |
| Mean | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
| Growth in Variance |  | −29.3% | −29.3% | −29.6% | 17.1% | 17.1% | 16.7% |
| Growth in Kurtosis |  | 9.3% | 9.4% | 9.4% | −3.0% | −3.0% | −2.6% |
| Loss in Correlation |  | −6.3% | −6.2% | −6.5% | −3.8% | −3.8% | −3.9% |
Note: The results correspond to the propensity score metric and 2000 replications. The estimator section depicts the mean and MSE of the probit coefficients. The propensity score section shows characteristics of the propensity score. Each column represents a contamination type and placement: Clean; BLP; GLP, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
Table 2. Effect of outliers on the Mahalanobis distance.
(BLP = bad leverage point; GLP = good leverage point; T = contamination in the treatment group; C = contamination in the control group.)

Panel A: Severe contamination, ten covariates (p = 10)

| Mahalanobis Distance | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C |
|---|---|---|---|---|---|---|---|
| Mean | 2.00 | 2.04 | 2.04 | 2.00 | 2.04 | 2.04 | 2.00 |
| Growth in Variance | | 395.0% | 394.5% | 361.0% | 398.8% | 398.6% | 364.2% |
| Growth in Asymmetry | | 89.6% | 89.5% | 87.7% | 89.6% | 89.6% | 87.7% |
| Growth in Kurtosis | | 95.9% | 95.6% | 93.1% | 95.7% | 95.4% | 92.7% |
| Loss in Correlation | | −76.3% | −76.3% | −75.6% | −75.4% | −75.4% | −74.7% |

Panel B: Severe contamination, two covariates (p = 2)

| Mahalanobis Distance | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C |
|---|---|---|---|---|---|---|---|
| Mean | 2.00 | 2.02 | 2.02 | 2.00 | 2.03 | 2.03 | 2.00 |
| Growth in Variance | | 206.6% | 207.5% | 199.2% | 234.6% | 234.9% | 222.7% |
| Growth in Asymmetry | | 86.7% | 87.0% | 85.6% | 88.3% | 88.4% | 87.0% |
| Growth in Kurtosis | | 112.4% | 112.9% | 110.1% | 111.1% | 111.4% | 108.9% |
| Loss in Correlation | | −64.3% | −64.0% | −64.0% | −63.8% | −63.5% | −63.3% |

Panel C: Mild contamination, ten covariates (p = 10)

| Mahalanobis Distance | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C |
|---|---|---|---|---|---|---|---|
| Mean | 2.00 | 2.01 | 2.01 | 2.00 | 2.02 | 2.02 | 2.00 |
| Growth in Variance | | 106.6% | 106.2% | 106.3% | 129.7% | 129.6% | 127.1% |
| Growth in Asymmetry | | 76.5% | 75.8% | 76.3% | 81.5% | 81.0% | 80.3% |
| Growth in Kurtosis | | 118.8% | 116.5% | 117.1% | 121.8% | 120.5% | 118.3% |
| Loss in Correlation | | −49.2% | −49.1% | −49.6% | −50.4% | −50.3% | −50.7% |

Panel D: Mild contamination, two covariates (p = 2)

| Mahalanobis Distance | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C |
|---|---|---|---|---|---|---|---|
| Mean | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 |
| Growth in Variance | | 1.7% | 2.0% | 2.3% | 19.2% | 19.2% | 19.8% |
| Growth in Asymmetry | | 3.4% | 3.5% | 4.4% | 21.6% | 21.3% | 22.2% |
| Growth in Kurtosis | | 7.0% | 6.9% | 9.0% | 37.6% | 36.9% | 38.1% |
| Loss in Correlation | | −7.8% | −7.8% | −8.3% | −17.7% | −17.6% | −18.2% |
Note: The results correspond to the Mahalanobis distance metric and 2000 replications. The statistics presented are comparable to the ones in Table 1. Each column represents a contamination type and placement: Clean; BLP; GLP, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
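The mechanism behind the Growth-in-Variance rows can be checked directly: outliers inflate the classical covariance estimate that the Mahalanobis distance relies on, which compresses the distances of the good points while the outliers' own distances explode, so the spread of the metric grows. A minimal sketch (the contamination level and outlier location are illustrative assumptions):

```python
import numpy as np

def mahalanobis_sq(X, ref):
    """Squared Mahalanobis distance of each row of X to the centre of `ref`,
    using the classical (non-robust) sample mean and covariance of `ref`."""
    mu = ref.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, S_inv, d)

rng = np.random.default_rng(1)
clean = rng.normal(size=(500, 2))
contaminated = clean.copy()
contaminated[:25] = rng.normal(loc=8.0, scale=0.5, size=(25, 2))  # 5% outliers

d_clean = mahalanobis_sq(clean, clean)
d_cont = mahalanobis_sq(contaminated, contaminated)
# The variance of the distances grows sharply under contamination,
# the pattern in the Growth-in-Variance rows above.
print(d_clean.var(), d_cont.var())
```

Note that the average in-sample squared distance is pinned at p(n − 1)/n by construction (a trace identity), so contamination shows up in the higher moments of the metric rather than in its mean.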
Table 3. Effect of a bad and a good leverage point on matching assignment.
| ID | y | x1 | x2 | T | P(T)o | P(T)blp | P(T)glp | m(PS)o | m(PS)blp | m(PS)glp | MDo | MDblp | MDglp | m(MD)o | m(MD)blp | m(MD)glp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.94 | 0.35 | −1.14 | T | 0.27 | 0.30 | 0.28 | 14 | 11 | 14 | 0.62 | 0.62 | 0.61 | 12 | 11 | 11 |
| 2 | 2.58 | 0.58 | −0.06 | T | 0.78 | 0.47 | 0.75 | 12 | 15 | 12 | 0.51 | 0.02 | 0.26 | 12 | 12 | 12 |
| 3 | 2.29 | 1.41 | 0.67 | T | 0.99 | 0.55 | 0.99 | 12 | 9 | 12 | 4.40 | 0.90 | 1.78 | 15 | 13 | 8 |
| 4 | 1.45 | −0.79 | 0.68 | T | 0.52 | 0.65 | 0.56 | 8 | 8 | 8 | 0.66 | 1.19 | 0.55 | 12 | 14 | 11 |
| 5 | 1.27 | −0.83 | 0.04 | T | 0.25 | 0.55 | 0.30 | 14 | 9 | 14 | 0.63 | 0.78 | 0.04 | 12 | 11 | 9 |
| 6 | 1.08 | 0.87 | 0.36 | T | 0.93 | 0.52 | 0.92 | 12 | 9 | 12 | 1.73 | 0.28 | 0.82 | 13 | 9 | 11 |
| 7 | 2.21 | 0.35 | 1.15 | T | 0.96 | 0.68 | 0.95 | 12 | 8 | 12 | 2.37 | 1.21 | 1.98 | 8 | 14 | 8 |
| 8 | −0.72 | −1.58 | 1.50 | C | 0.51 | 0.79 | 0.59 | | | | 2.60 | 3.74 | 1.94 | | | |
| 9 | 1.26 | −0.17 | −0.08 | C | 0.47 | 0.50 | 0.49 | | | | 0.02 | 0.18 | 0.04 | | | |
| 10 | −1.47 | 1.74 | −3.12 | C | 0.13 | 0.06 | 0.10 | | | | 5.98 | 6.52 | 5.88 | | | |
| 11 | 0.52 | −0.25 | −1.00 | C | 0.13 | 0.35 | 0.16 | | | | 0.91 | 0.69 | 0.50 | | | |
| 12 | 0.04 | 0.75 | −0.51 | C | 0.68 | 0.38 | 0.66 | | | | 0.50 | 0.11 | 0.23 | | | |
| 13 | −0.10 | −0.61 | −0.90 | C | 0.08 | 0.38 | 0.11 | | | | 1.39 | 0.88 | 0.46 | | | |
| 14 | −0.08 | 0.30 | −1.48 | C | 0.16 | 0.26 | 0.16 | | | | 1.24 | 1.14 | 1.16 | | | |
| 15 | −1.53 | −2.30 | 1.16 | C | 0.14 | 0.49 | 0.00 | | | | 4.42 | 9.75 | 11.74 | | | |

Note: P(T) columns are propensity scores and MD columns Mahalanobis distances under the original data (o), the BLP-contaminated data (blp), and the GLP-contaminated data (glp); m(PS) and m(MD) report the control ID matched to each treated unit under the corresponding metric.
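The match columns in Table 3 follow from a plain 1-nearest-neighbour rule on the chosen metric. The propensity-score matches under the original data can be reproduced directly from the table's P(T)o column (matching with replacement):

```python
import numpy as np

def nn_match(treated_scores, control_scores, control_ids):
    """1-NN matching with replacement: for each treated unit, return the ID
    of the control whose score is closest."""
    dist = np.abs(np.subtract.outer(treated_scores, control_scores))
    return control_ids[dist.argmin(axis=1)]

# Propensity scores from Table 3, column P(T)o
ps_treated = np.array([0.27, 0.78, 0.99, 0.52, 0.25, 0.93, 0.96])   # IDs 1-7
ps_control = np.array([0.51, 0.47, 0.13, 0.13, 0.68, 0.08, 0.16, 0.14])  # IDs 8-15
ids = np.arange(8, 16)

matches = nn_match(ps_treated, ps_control, ids)
print(matches)  # [14 12 12  8 14 12 12], the m(PS)o column for treated units 1-7
```

Running the same rule on the contaminated columns P(T)blp and P(T)glp reproduces the reshuffled assignments: a single leverage point changes the scores of every unit and therefore which controls serve as counterfactuals.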
Table 4. Simulated bias and MSE of Average Treatment Effect on the Treated (TOT) estimates in the presence of outliers using propensity score.
(BLP = bad leverage point; GLP = good leverage point; VO = vertical outliers in Y; T = contamination in the treatment group; C = contamination in the control group.)

Panel A: Severe contamination, ten covariates (p = 10)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| *BIAS (×1000)* | | | | | | | | | | |
| Pair Matching | 155.72 | 536.54 | 568.73 | 547.42 | 43.91 | 162.42 | 100.70 | 1738.27 | −1425.95 | 160.84 |
| Ridge M. Epan | 143.49 | 501.50 | 525.45 | 511.43 | 48.85 | 151.99 | 105.46 | 1725.28 | −1428.19 | 153.34 |
| IPW | 299.37 | 634.05 | 636.05 | 632.63 | 299.32 | 300.98 | 297.69 | 1881.92 | −1264.44 | 309.81 |
| Pair Matching (bias corrected) | −1.48 | 785.56 | 433.26 | 461.78 | −797.19 | −4.43 | −398.52 | 1581.07 | −1570.68 | 9.77 |
| *MSE (×1000)* | | | | | | | | | | |
| Pair Matching | 112.73 | 353.53 | 384.21 | 361.60 | 134.13 | 121.73 | 122.59 | 3113.71 | 3808.92 | 1005.98 |
| Ridge M. Epan | 93.06 | 301.00 | 320.69 | 308.11 | 106.14 | 100.44 | 100.95 | 3053.33 | 3406.23 | 760.11 |
| IPW | 164.17 | 459.52 | 464.34 | 458.64 | 164.85 | 172.71 | 167.73 | 3619.12 | 2173.43 | 421.42 |
| Pair Matching (bias corrected) | 61.73 | 669.61 | 234.13 | 258.86 | 857.37 | 65.18 | 283.12 | 2564.56 | 3774.03 | 735.33 |

Panel B: Severe contamination, two covariates (p = 2)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| *BIAS (×1000)* | | | | | | | | | | |
| Pair Matching | 9.13 | 367.76 | 367.39 | 364.36 | −112.22 | 10.93 | −50.59 | 717.27 | −705.47 | 12.48 |
| Ridge M. Epan | 0.29 | 362.47 | 362.71 | 361.51 | −74.07 | 1.11 | 1.80 | 709.29 | −710.40 | 2.20 |
| IPW | 16.56 | 366.56 | 366.76 | 366.70 | 13.15 | 14.52 | 14.14 | 724.71 | −694.33 | 18.71 |
| Pair Matching (bias corrected) | −0.05 | 354.12 | 363.51 | 359.18 | −353.72 | 1.14 | −176.80 | 708.10 | −714.09 | 3.67 |
| *MSE (×1000)* | | | | | | | | | | |
| Pair Matching | 13.17 | 145.73 | 145.98 | 143.36 | 42.97 | 14.28 | 22.53 | 528.13 | 557.91 | 36.83 |
| Ridge M. Epan | 8.70 | 138.96 | 139.35 | 138.48 | 26.19 | 9.39 | 9.22 | 512.84 | 535.14 | 19.80 |
| IPW | 11.37 | 140.60 | 140.75 | 140.72 | 11.44 | 12.32 | 11.91 | 536.91 | 512.48 | 20.91 |
| Pair Matching (bias corrected) | 13.17 | 134.87 | 143.09 | 139.57 | 184.52 | 14.30 | 59.37 | 515.14 | 571.01 | 37.00 |

Panel C: Mild contamination, ten covariates (p = 10)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| *BIAS (×1000)* | | | | | | | | | | |
| Pair Matching | 155.72 | 446.02 | 478.72 | 465.70 | 49.10 | 163.02 | 104.01 | 630.48 | −318.78 | 157.26 |
| Ridge M. Epan | 143.49 | 407.01 | 434.69 | 422.19 | 52.56 | 150.44 | 104.68 | 618.03 | −328.02 | 146.44 |
| IPW | 299.37 | 553.26 | 555.45 | 557.44 | 288.31 | 289.99 | 287.53 | 774.14 | −169.77 | 302.51 |
| Pair Matching (bias corrected) | −1.48 | 236.68 | 376.49 | 350.68 | −240.05 | −2.05 | −118.73 | 473.28 | −472.24 | 1.89 |
| *MSE (×1000)* | | | | | | | | | | |
| Pair Matching | 112.73 | 262.10 | 289.80 | 278.70 | 130.69 | 120.45 | 119.56 | 486.58 | 342.40 | 193.33 |
| Ridge M. Epan | 93.06 | 212.60 | 234.28 | 223.57 | 104.30 | 100.14 | 99.52 | 455.07 | 297.76 | 153.73 |
| IPW | 164.17 | 361.29 | 367.50 | 367.74 | 160.30 | 168.08 | 163.66 | 674.21 | 147.06 | 188.24 |
| Pair Matching (bias corrected) | 61.73 | 100.48 | 187.36 | 170.06 | 151.11 | 66.23 | 90.72 | 286.13 | 398.70 | 121.23 |

Panel D: Mild contamination, two covariates (p = 2)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| *BIAS (×1000)* | | | | | | | | | | |
| Pair Matching | 9.13 | 127.38 | 204.31 | 175.95 | −79.71 | −3.21 | −40.49 | 221.57 | −205.25 | 10.13 |
| Ridge M. Epan | 0.29 | 125.16 | 183.86 | 160.66 | −67.73 | −14.64 | −38.13 | 212.99 | −212.92 | 0.86 |
| IPW | 16.56 | 166.69 | 166.07 | 168.93 | −16.22 | −14.88 | −13.95 | 229.01 | −196.71 | 17.21 |
| Pair Matching (bias corrected) | −0.05 | 105.04 | 199.85 | 172.82 | −106.54 | −13.39 | −58.22 | 212.40 | −214.26 | 1.07 |
| *MSE (×1000)* | | | | | | | | | | |
| Pair Matching | 13.17 | 27.84 | 52.39 | 43.02 | 25.87 | 13.98 | 17.67 | 62.24 | 59.63 | 15.40 |
| Ridge M. Epan | 8.70 | 23.30 | 41.90 | 33.80 | 15.14 | 9.56 | 11.09 | 54.14 | 56.18 | 9.77 |
| IPW | 11.37 | 35.14 | 36.07 | 36.43 | 12.83 | 13.38 | 13.05 | 63.62 | 51.76 | 12.18 |
| Pair Matching (bias corrected) | 13.17 | 22.53 | 50.49 | 41.89 | 31.93 | 14.24 | 19.74 | 58.35 | 63.53 | 15.40 |
Note: The results use the propensity score metric and 2000 replications. The statistics presented are the bias and MSE of each estimator (Pair matching, Ridge, IPW and bias corrected Pair matching) scaled by 1000. The bias is calculated by subtracting the true effect of the treatment (1) from the estimate of TOT. Each column represents a contamination type and placement: Clean; BLP; GLP; and vertical outliers, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
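The Bias and MSE entries follow the note above: per replication the TOT estimate is compared with the true effect of 1, and both summaries are scaled by 1000. A small helper making that computation explicit:

```python
import numpy as np

def bias_mse(tot_estimates, true_tot=1.0, scale=1000):
    """Bias and MSE of replication-level TOT estimates, scaled by 1000.
    The bias subtracts the true treatment effect from the mean estimate."""
    est = np.asarray(tot_estimates, dtype=float)
    bias = est.mean() - true_tot
    mse = ((est - true_tot) ** 2).mean()
    return scale * bias, scale * mse

# e.g. two replications at 1.1 and 0.9 give (near-)zero bias but positive MSE
print(bias_mse([1.1, 0.9]))
```

The vertical-outlier columns illustrate why both summaries are needed: bias from outliers in T and in C largely cancels when both groups are contaminated, while the MSE stays inflated.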
Table 5. Effect of outliers on the balance checking measures.
(BLP = bad leverage point; GLP = good leverage point; T = contamination in the treatment group; C = contamination in the control group.)

Panel A: Severe contamination, ten covariates (p = 10)

| | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C |
|---|---|---|---|---|---|---|---|
| *Contaminated variable* | | | | | | | |
| Percentage Bias | 0.20 | 0.19 | 0.25 | 0.07 | 0.46 | 0.21 | 0.31 |
| Variance ratio | 1.21 | 28.45 | 0.14 | 3.87 | 31.94 | 1.22 | 16.86 |
| *Remaining variables* | | | | | | | |
| Percentage Bias | 0.21 | 0.17 | 0.16 | 0.16 | 0.26 | 0.21 | 0.23 |
| Variance ratio | 1.21 | 1.15 | 1.14 | 1.14 | 1.29 | 1.22 | 1.25 |

Panel B: Severe contamination, two covariates (p = 2)

| | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C |
|---|---|---|---|---|---|---|---|
| *Contaminated variable* | | | | | | | |
| Percentage Bias | 0.05 | 0.04 | 0.07 | 0.05 | 0.33 | 0.06 | 0.21 |
| Variance ratio | 1.05 | 6.25 | 0.17 | 1.02 | 6.00 | 1.06 | 3.73 |
| *Remaining variables* | | | | | | | |
| Percentage Bias | 0.05 | 0.01 | 0.01 | 0.01 | 0.13 | 0.06 | 0.08 |
| Variance ratio | 1.06 | 1.02 | 1.03 | 1.03 | 1.00 | 1.07 | 1.02 |

Panel C: Mild contamination, ten covariates (p = 10)

| | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C |
|---|---|---|---|---|---|---|---|
| *Contaminated variable* | | | | | | | |
| Percentage Bias | 0.20 | 0.18 | 0.23 | 0.19 | 0.35 | 0.20 | 0.24 |
| Variance ratio | 1.22 | 3.63 | 0.25 | 0.87 | 4.02 | 1.19 | 2.62 |
| *Remaining variables* | | | | | | | |
| Percentage Bias | 0.20 | 0.17 | 0.16 | 0.16 | 0.25 | 0.21 | 0.23 |
| Variance ratio | 1.20 | 1.15 | 1.14 | 1.14 | 1.28 | 1.22 | 1.24 |

Panel D: Mild contamination, two covariates (p = 2)

| | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C |
|---|---|---|---|---|---|---|---|
| *Contaminated variable* | | | | | | | |
| Percentage Bias | 0.05 | 0.09 | 0.09 | 0.06 | 0.10 | 0.05 | 0.07 |
| Variance ratio | 1.06 | 1.48 | 0.60 | 0.87 | 1.47 | 1.01 | 1.25 |
| *Remaining variables* | | | | | | | |
| Percentage Bias | 0.05 | 0.05 | 0.07 | 0.04 | 0.08 | 0.06 | 0.06 |
| Variance ratio | 1.06 | 0.95 | 1.15 | 1.07 | 1.02 | 1.08 | 1.04 |
Note: The results correspond to the propensity score metric for pair matching and 2000 replications. Rows depict the standardized difference (bias) and variance ratio between treated and control groups, for the contaminated variable and also for the remaining variables in the model. Each column represents a contamination type and placement: Clean; BLP; GLP, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
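Both balance measures are easy to compute per covariate. The sketch below uses the usual standardized-difference convention (difference in means over the pooled standard deviation); whether the paper divides by the pooled or the treated-group standard deviation is an assumption here, but the variance ratio is unambiguous.

```python
import numpy as np

def balance_stats(x_treated, x_control):
    """Standardized difference (bias) and variance ratio between the treated
    and control samples of one covariate."""
    v_t = x_treated.var(ddof=1)
    v_c = x_control.var(ddof=1)
    std_bias = (x_treated.mean() - x_control.mean()) / np.sqrt((v_t + v_c) / 2)
    return std_bias, v_t / v_c

rng = np.random.default_rng(2)
x_t = rng.normal(size=200)
x_c = rng.normal(size=200)
print(balance_stats(x_t, x_c))  # near 0 and near 1 for balanced samples
```

A variance ratio far from one after matching, as in the contaminated-variable rows above, is exactly the signal that could push a practitioner to respecify the propensity score.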
Table 6. Simulated bias and MSE of the treatment effect estimates (TOT) using propensity score and reweighted after the Smultiv method.
(BLP = bad leverage point; GLP = good leverage point; VO = vertical outliers in Y; T = contamination in the treatment group; C = contamination in the control group.)

Panel A: Severe contamination, ten covariates (p = 10)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| *BIAS (×1000)* | | | | | | | | | | |
| Pair Matching | 158.08 | 157.10 | 162.57 | 160.13 | 157.10 | 162.57 | 160.13 | 157.10 | 162.57 | 160.13 |
| Ridge M. Epan | 155.01 | 153.21 | 163.61 | 157.12 | 153.21 | 163.61 | 157.12 | 153.21 | 163.61 | 157.12 |
| IPW | 326.35 | 322.38 | 327.74 | 325.20 | 322.38 | 327.74 | 325.20 | 322.38 | 327.74 | 325.20 |
| Pair Matching (bias corrected) | −4.61 | −1.71 | −1.50 | −2.46 | −1.71 | −1.50 | −2.46 | −1.71 | −1.50 | −2.46 |
| *MSE (×1000)* | | | | | | | | | | |
| Pair Matching | 103.35 | 103.17 | 109.20 | 106.19 | 103.17 | 109.20 | 106.19 | 103.17 | 109.20 | 106.19 |
| Ridge M. Epan | 89.68 | 89.20 | 96.04 | 92.18 | 89.20 | 96.04 | 92.18 | 89.20 | 96.04 | 92.18 |
| IPW | 168.24 | 165.36 | 173.85 | 170.05 | 165.36 | 173.85 | 170.05 | 165.36 | 173.85 | 170.05 |
| Pair Matching (bias corrected) | 57.22 | 59.24 | 59.53 | 60.21 | 59.24 | 59.53 | 60.21 | 59.24 | 59.53 | 60.21 |

Panel B: Severe contamination, two covariates (p = 2)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| *BIAS (×1000)* | | | | | | | | | | |
| Pair Matching | 13.98 | 14.40 | 17.01 | 18.32 | 14.40 | 17.01 | 18.32 | 14.40 | 17.03 | 18.31 |
| Ridge M. Epan | 0.06 | 1.22 | 1.74 | 2.33 | 1.22 | 1.74 | 2.33 | 1.22 | 1.73 | 2.33 |
| IPW | 50.56 | 44.67 | 50.43 | 47.75 | 44.67 | 50.43 | 47.75 | 44.67 | 50.43 | 47.75 |
| Pair Matching (bias corrected) | −1.37 | 0.50 | 1.29 | 3.65 | 0.50 | 1.29 | 3.65 | 0.50 | 1.30 | 3.65 |
| *MSE (×1000)* | | | | | | | | | | |
| Pair Matching | 12.89 | 13.15 | 13.87 | 13.39 | 13.15 | 13.87 | 13.39 | 13.15 | 13.88 | 13.38 |
| Ridge M. Epan | 8.51 | 8.94 | 9.03 | 8.95 | 8.94 | 9.03 | 8.95 | 8.94 | 9.02 | 8.95 |
| IPW | 10.94 | 10.81 | 11.39 | 11.13 | 10.81 | 11.39 | 11.13 | 10.81 | 11.39 | 11.13 |
| Pair Matching (bias corrected) | 13.08 | 13.19 | 14.03 | 13.35 | 13.19 | 14.03 | 13.35 | 13.19 | 14.03 | 13.35 |

Panel C: Mild contamination, ten covariates (p = 10)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| *BIAS (×1000)* | | | | | | | | | | |
| Pair Matching | 158.08 | 159.37 | 184.47 | 170.38 | 155.57 | 162.66 | 159.47 | 187.90 | 143.09 | 165.59 |
| Ridge M. Epan | 155.01 | 156.83 | 182.20 | 169.61 | 151.49 | 163.50 | 156.27 | 186.61 | 142.49 | 163.68 |
| IPW | 326.35 | 327.93 | 340.40 | 334.92 | 321.77 | 327.41 | 324.76 | 354.28 | 305.75 | 330.53 |
| Pair Matching (bias corrected) | −4.61 | 1.95 | 22.80 | 8.90 | −5.46 | −1.03 | −3.61 | 30.80 | −20.01 | 3.87 |
| *MSE (×1000)* | | | | | | | | | | |
| Pair Matching | 103.35 | 102.88 | 119.55 | 109.51 | 103.60 | 109.63 | 106.28 | 114.93 | 106.79 | 110.93 |
| Ridge M. Epan | 89.68 | 89.81 | 103.87 | 96.26 | 89.18 | 96.02 | 92.12 | 101.21 | 92.07 | 95.36 |
| IPW | 168.24 | 168.47 | 181.48 | 176.14 | 165.14 | 173.69 | 169.86 | 187.75 | 160.26 | 173.85 |
| Pair Matching (bias corrected) | 57.22 | 58.22 | 62.05 | 60.18 | 59.71 | 59.57 | 60.54 | 60.01 | 63.42 | 61.48 |

Panel D: Mild contamination, two covariates (p = 2)

| Estimator | Clean | BLP in T | BLP in C | BLP in T and C | GLP in T | GLP in C | GLP in T and C | VO in T | VO in C | VO in T and C |
|---|---|---|---|---|---|---|---|---|---|---|
| *BIAS (×1000)* | | | | | | | | | | |
| Pair Matching | 13.98 | 66.06 | 112.49 | 87.75 | −21.00 | 9.60 | −3.57 | 143.83 | −99.29 | 22.02 |
| Ridge M. Epan | 0.06 | 55.77 | 94.40 | 75.45 | −29.71 | −6.86 | −17.36 | 129.81 | −118.99 | 5.99 |
| IPW | 50.56 | 105.41 | 126.67 | 114.80 | 29.92 | 37.12 | 34.51 | 172.36 | −64.94 | 54.20 |
| Pair Matching (bias corrected) | −1.37 | 50.92 | 101.49 | 76.23 | −40.72 | −6.36 | −21.52 | 129.50 | −115.00 | 6.88 |
| *MSE (×1000)* | | | | | | | | | | |
| Pair Matching | 12.89 | 17.33 | 24.93 | 20.25 | 15.36 | 13.29 | 13.76 | 34.25 | 23.69 | 14.21 |
| Ridge M. Epan | 8.51 | 12.25 | 17.88 | 14.90 | 10.49 | 9.21 | 9.49 | 26.14 | 24.02 | 9.65 |
| IPW | 10.94 | 19.35 | 24.18 | 21.40 | 9.86 | 10.25 | 10.20 | 38.89 | 13.44 | 12.07 |
| Pair Matching (bias corrected) | 13.08 | 15.82 | 22.72 | 18.61 | 17.18 | 13.73 | 14.75 | 30.58 | 27.48 | 14.18 |
Note: The results use the propensity score metric and 2000 replications. The statistics presented are the bias and MSE of each estimator (Pair matching, Ridge, IPW and bias corrected Pair matching) scaled by 1000 after reweighting using the SMULTIV method. The bias is calculated by subtracting the true effect of the treatment (1) from the estimate of TOT. Each column represents a contamination type and placement: Clean; BLP; GLP; and vertical outliers, in treatment group (T), in control group (C) and both. Panels A through D represent different combinations of contamination levels and number of covariates.
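The reweighting behind Table 6 can be approximated with any robust multivariate location/scatter estimate: compute robust distances for the covariates, zero-weight flagged observations, then estimate the propensity score and the treatment effect on the reweighted sample. The sketch below is a deliberately simplified stand-in for the paper's Smultiv step (coordinatewise median/MAD instead of the actual S-estimator of multivariate location, so correlations between covariates are ignored); it only illustrates the idea of distance-based hard weights.

```python
import numpy as np
from scipy.stats import chi2

def outlier_weights(X, level=0.975):
    """0/1 weights from robust distances: centre and scale each covariate by
    its median and MAD, then flag points whose squared distance exceeds a
    chi-squared(p) cutoff. A crude proxy for S-estimator-based reweighting."""
    med = np.median(X, axis=0)
    mad = 1.4826 * np.median(np.abs(X - med), axis=0)  # consistent at the normal
    d2 = (((X - med) / mad) ** 2).sum(axis=1)
    return (d2 <= chi2.ppf(level, df=X.shape[1])).astype(float)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
X[0] = [50.0, 50.0]  # one gross outlier
w = outlier_weights(X)
print(w[0], w.mean())  # the outlier gets weight 0; most of the sample keeps weight 1
```

Because the median and MAD are themselves insensitive to the contamination, the flagged set does not depend on how badly the classical moments were distorted, which is why the reweighted columns in Table 6 sit so close to the clean column.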
Table 7. Treatment effect estimates of the LaLonde and DW samples.
| Comparison Group | Treatment Group | Experimental TOT | Estimated TOT | Estimated SD-TOT | Estimated Smultiv-TOT |
|---|---|---|---|---|---|
| PSID [2490 obs] | LaLonde [297 obs] | 886 | −1390 (966) | −870 (1012) | −514 (1116) |
| PSID [2490 obs] | Dehejia-Wahba [185 obs] | 1794 | 990 (1255) | 1306 (1662) | 2135 (1325) |
| CPS [15992 obs] | LaLonde [297 obs] | 886 | −4001 (563) | −2884 (713) | −3130 (1070) |
| CPS [15992 obs] | Dehejia-Wahba [185 obs] | 1794 | 1566 (770) | 1824 (890) | 1849 (819) |

Canavire-Bacarreza, G.; Castro Peñarrieta, L.; Ugarte Ontiveros, D. Outliers in Semi-Parametric Estimation of Treatment Effects. Econometrics 2021, 9, 19. https://doi.org/10.3390/econometrics9020019
