
We’ve promised to provide sandwich-type SEs for estimates of treatment effect, with stacking of estimating equations as appropriate to address possible use of separate regression fits to estimate covariance adjustment terms and then to estimate marginal treatment effects. Often ``sandwich standard error’’ conjures an expectation of standard errors that are model-based in the sense of assuming i.i.d. data $(A_i, W_i, X_i, Y_{0i}, \ldots, Y_{Ki})$ and making inference conditional on the values of treatment assignments $A \in \{0, \ldots, K\}$, importance or frequency weights $W$, and covariates $X$, but not the values of potential responses $\{Y_k : k \leq K\}$. (This document takes $i$ to range over intact clusters.) With the i.i.d. assumption, these model-based SEs are valid even if the design specification isn’t, provided that the outcome model is well specified. The role of the study design is to provide analysis weights which, if respected, ensure desired interpretations of marginal effect estimators even if the $Y$-model happens to be misspecified.

An alternative would be to offer sandwich standard errors under a design-based interpretation, differing from the above principally in terms of conditioning on

\begin{equation*}
\mathcal{Z} = \sigma\left(\{(X_i, Y_{0i}, \ldots, Y_{Ki}) : i\},\; \{\textstyle\sum_{i \in S} [\![A_i = k]\!] : k \in \{0, \ldots, K\};\, S \in \mathcal{P}\}\right)
\end{equation*}

where $\mathcal{P}$ is a given partition of $\{1, \ldots, n\}$ encoding a stratification. Of particular interest are standard errors for Hajek estimators, i.e. estimators of the form
\begin{equation*}
\hat{\mu}_k[\mathbf{v}] = \frac{\sum_i \check{w}_{i[k]} [\![A_i=k]\!]\, v_i}{\sum_i \check{w}_{i[k]} [\![A_i=k]\!]}
\end{equation*}
and their differences
\begin{equation}\label{eq:hajekdiff}
\hat{\mu}_k[\mathbf{u}] - \hat{\mu}_0[\mathbf{v}].
\end{equation}
The case weights $\{\check{w}_{i[k]} : i \leq n,\, k \leq K\}$ are the products of a frequency or importance weight $w_i$ and, as appropriate, a weighting factor representing a reciprocal of $i$’s probability (or odds) of assignment to $k$ (or to one of $\{k, \ldots, K\}$); in general they differ by $k$. With $\check{w}_{i[j]} = [\mathbb{P}(A_i = j \mid \mathcal{Z})]^{-1}$ for $j = 0, \ldots, K$, $\hat{\mu}_k[\mathbf{y}_k] - \hat{\mu}_0[\mathbf{y}_0]$ is the natural estimate of the effect of treatment $k$ within the experimental or quasiexperimental study sample, $n^{-1}\sum_{i=1}^n (y_{ki} - y_{0i})$, also known as the sample average treatment effect. If the experimental observations are samples from a broader super-population, with sample inclusion probabilities $\pi_1, \ldots, \pi_n$, then setting $\check{w}_{i[j]} = \pi_i^{-1}[\mathbb{P}(A_i = j)]^{-1}$ for $j = 0, \ldots, K$ makes $\hat{\mu}_k[\mathbf{y}_k] - \hat{\mu}_0[\mathbf{y}_0]$ the corresponding natural estimate of the population average treatment effect.
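As a concrete illustration, here is a minimal numerical sketch of Hajek estimators of this form (function names are hypothetical). It assumes scalar responses and case weights $\check{w}_{i[k]}$ supplied as arrays, with assignment or inclusion probabilities already folded in.

```python
import numpy as np

def hajek_mean(v, a, w_check, k):
    """Hajek aggregate: weighted mean of v over units with A_i = k,
    normalized by the realized sum of case weights w_check."""
    m = (a == k)
    return np.sum(w_check[m] * v[m]) / np.sum(w_check[m])

def hajek_diff(v, u, a, w_k, w_0, k=1):
    """Hajek estimate of the contrast between conditions k and 0,
    with condition-specific case weights w_k and w_0."""
    return hajek_mean(v, a, w_k, k) - hajek_mean(u, a, w_0, 0)

# Example: four clusters, equal assignment odds, unit frequency weights.
# Constant inverse-probability weights cancel out of the Hajek ratio.
y = np.array([3.0, 1.0, 4.0, 2.0])
a = np.array([1, 1, 0, 0])
w = np.full(4, 2.0)
effect = hajek_diff(y, y, a, w, w)
```

Because the weights are normalized by their realized sum, any constant factor common to all clusters cancels, as the example's constant weights illustrate.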

In one special case a design-based SE for \eqref{eq:hajekdiff} has been available for some time, with an optional interpretation in terms of M-estimation emerging more recently. Suppose:

In this case the variance estimate used in the two-sample $t$ test without pooling of variances validly estimates the variance of \eqref{eq:hajekdiff}, in the following sense. If $v$ is observed only when $A = k$ while $u$ is observed only when $A = 0$, then $s_{vu} = (n-1)^{-1}\sum_{i=1}^n (v_i - \bar v)(u_i - \bar u)$ is not identified; as far as the data are concerned, it could lie anywhere between $\pm s_v s_u$, where $s_v^2 = s_{vv} = (n-1)^{-1}\sum_{i=1}^n (v_i - \bar v)^2$. Assuming $s_{vu} = s_v s_u$, the ordinary unpooled variance is unbiased for the design-based variance (conditional variance given $\mathcal{Z}$) of \eqref{eq:hajekdiff}. If on the other hand $s_{vu} < s_v s_u$, then the expected value of this ordinary unpooled variance exceeds the design-based variance of \eqref{eq:hajekdiff}. The possibilities $s_{vu} = s_v s_u$ and $s_{vu} < s_v s_u$ being exhaustive, by Cauchy-Schwarz, the expected value of the unpooled variance can be no less than $\mathrm{Var}(\hat{\mu}_k[\mathbf{u}] - \hat{\mu}_0[\mathbf{v}] \mid \mathcal{Z})$; it is valid as a possibly conservative estimate, in that it cannot be negatively biased.
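The unpooled two-sample variance estimate in question is simple to compute; a sketch in Python (the function name is hypothetical, and the conservativeness property is the document's claim, not something the code verifies):

```python
import numpy as np

def unpooled_variance(v_obs, u_obs):
    """Neyman-style variance estimate for a difference of means without
    pooling: s_v^2 / n_k + s_u^2 / n_0.  Per the argument above, its
    expectation equals the design-based variance when s_vu = s_v * s_u
    and exceeds it otherwise, so it cannot be negatively biased."""
    return (np.var(v_obs, ddof=1) / len(v_obs)
            + np.var(u_obs, ddof=1) / len(u_obs))

v = np.array([1.0, 2.0, 3.0])   # observed under A = k
u = np.array([4.0, 6.0])        # observed under A = 0
var_hat = unpooled_variance(v, u)
```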

As shown by , in this case the same variance estimate coincides with the HC2 flavor of the Huber-White estimate. As a result, it’s sometimes said that ordinary sandwich or cluster-robust variance estimates admit of design-based interpretations provided that one uses the HC2 form. I suspect this works only in special cases, however, perhaps not going far beyond the one delimited above.

The use of the Neyman insight to get design-based SEs through Huber-White covariance estimates intended for model-based interpretation has several important limitations.

The M-estimation framework promises solutions to many or all of these impediments.

The Hajek estimators $\hat{\mu}_0[\mathbf{y}_0], \ldots, \hat{\mu}_K[\mathbf{y}_K]$ admit of being defined as the unique multivariate root of the estimating function
\begin{equation}\label{eq:ee-hajekdiff}
(\mu_0, \ldots, \mu_K) \mapsto \left[\begin{array}{c}
\sum_i [\![a_i=K]\!]\, \check{w}_{i[K]} (y_{Ki} - \mu_K) \\
\vdots \\
\sum_i [\![a_i=0]\!]\, \check{w}_{i[0]} (y_{0i} - \mu_0)
\end{array}\right].
\end{equation}

The information matrices $A(\mu_0, \ldots, \mu_K)$ corresponding to \eqref{eq:ee-hajekdiff} are diagonal. For a sandwich estimate of variance, then, we only need estimates of the covariance of \eqref{eq:ee-hajekdiff}. In the one-stratum case with multiple representatives of conditions $k$ and $0$, we can simply apply Neyman’s two-sample unpooled-variances SE to the observations $\{\check{w}_{i[k]}(y_{ki} - \mu_k) : A_i = k\}$ and $\{\check{w}_{i[0]}(y_{0i} - \mu_0) : A_i = 0\}$. This impediment is solved.
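In the one-stratum case, the resulting recipe (diagonal A entries equal to realized weight sums, B estimated by the unpooled-variance device applied to the contributions $\check{w}_{i[k]}(y_{ki} - \mu_k)$) can be sketched as follows. This is a rough illustration under assumed names, not a definitive implementation:

```python
import numpy as np

def hajek_sandwich_se(y, a, w, k=1):
    """Design-based sandwich SE for mu_hat_k - mu_hat_0, one stratum.
    For each condition j: A_j = realized sum of weights (the diagonal
    A-matrix entry); B_j estimated as n_j times the sample variance of
    the estimating-function contributions w_i * (y_i - mu_j)."""
    var_parts = []
    for j in (k, 0):
        m = (a == j)
        mu_j = np.sum(w[m] * y[m]) / np.sum(w[m])   # Hajek mean
        psi = w[m] * (y[m] - mu_j)                  # EE contributions
        A_j = np.sum(w[m])
        B_j = m.sum() * np.var(psi, ddof=1)         # unpooled device
        var_parts.append(B_j / A_j**2)              # A^{-1} B A^{-1}
    return float(np.sqrt(sum(var_parts)))

se = hajek_sandwich_se(np.array([3.0, 1.0, 4.0, 2.0]),
                       np.array([1, 1, 0, 0]),
                       np.ones(4))
```

With unit weights this reduces to the familiar $\sqrt{s_k^2/n_k + s_0^2/n_0}$ of the unpooled two-sample $t$ test.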

For standard errors of the difference of Hajek estimators within a subgroup $g$,
\begin{equation*}
\frac{\sum_i \check{w}_{i[k]} [\![A_i=k,\, G_i=g]\!]\, y_{ki}}{\sum_i \check{w}_{i[k]} [\![A_i=k,\, G_i=g]\!]} - \frac{\sum_i \check{w}_{i[0]} [\![A_i=0,\, G_i=g]\!]\, y_{0i}}{\sum_i \check{w}_{i[0]} [\![A_i=0,\, G_i=g]\!]},
\end{equation*}
we can simply fold the subgroup indicator into the weights, weighting by $\{\tilde{w}_{i[j]} : j; i\} = \{\check{w}_{i[j]}[\![G_i=g]\!] : j; i\}$ as opposed to $\{\check{w}_{i[j]} : j; i\}$. This solves the first part of the impediment.
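Folding the subgroup indicator into the weights is a one-liner; a sketch with hypothetical names:

```python
import numpy as np

def fold_subgroup(w_check, g, target):
    """Return w_tilde: case weights zeroed outside subgroup G_i = g,
    so non-members drop from both the numerator and the denominator
    of the Hajek ratio."""
    return w_check * (g == target)

w = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array(["a", "a", "b", "b"])
w_tilde = fold_subgroup(w, g, "a")
```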

(The remaining piece of the impediment is the need for estimates of
\begin{equation*}
\mathrm{Cov}\left( \frac{\sum_i \check{w}_{i[k]} [\![A_i=k]\!]\, v_{ki}}{\sum_i \check{w}_{i[k]} [\![A_i=k]\!]} - \frac{\sum_i \check{w}_{i[0]} [\![A_i=0]\!]\, v_{0i}}{\sum_i \check{w}_{i[0]} [\![A_i=0]\!]},\ \frac{\sum_i \check{w}_{i[k]} [\![A_i=k]\!]\, u_{ki}}{\sum_i \check{w}_{i[k]} [\![A_i=k]\!]} - \frac{\sum_i \check{w}_{i[0]} [\![A_i=0]\!]\, u_{0i}}{\sum_i \check{w}_{i[0]} [\![A_i=0]\!]} \right),
\end{equation*}
where $v_{ji}$ and $u_{ji}$ are observed only when $A_i = j$, for all $i$ and $j$. I haven’t had the opportunity to sit down and try to work this out, but I’m optimistic that it can be handled with straightforward extensions of the Neyman insight.)

Under the design-based interpretation, variances and covariances of quantile regression estimating functions — for a quantile regression with covariates $X$ and $c_i$ observations per cluster $i$,
\begin{equation*}
\beta \mapsto \sum_{i=1}^n \sum_{j=1}^{c_i} \sum_{k=0}^K x_{ij}\left([\![y_{kij} \leq X_{ij}'\beta,\ a_i = k]\!] - \tau\right)
\end{equation*}
— are likely to be estimable in much the same way as variances and covariances of estimating functions for other forms of regression. In any case, there will be no need for estimation of a sparsity parameter. This impediment is removed.
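For reference, a quantile-regression estimating function of this general shape is easy to evaluate directly. The sketch below ignores clustering and treatment arms and just shows the core $x\,([\![y \le x'\beta]\!] - \tau)$ structure, under assumed names:

```python
import numpy as np

def qr_estimating_function(beta, X, y, tau):
    """Sum over observations of x_i * ([[y_i <= x_i' beta]] - tau);
    a tau-th regression quantile (approximately) zeroes this sum."""
    return X.T @ ((y <= X @ beta).astype(float) - tau)

X = np.ones((5, 1))                       # intercept-only design
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
val = qr_estimating_function(np.array([3.0]), X, y, 0.5)
```

Because the summands are bounded indicator contrasts, their design-based variances and covariances involve no density (sparsity) estimation, consistent with the point above.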

(Quantile regression doesn’t bear any of its usual interpretations under conditioning on $\mathcal{Z}$, but that doesn’t matter for purposes of using it to estimate slopes in an RDD.)

Now consider the impediment posed by strata $S$ in which one or both of the conditions $j$ being contrasted have only one representative, $\sum_{i\in S} [\![A_i=j]\!] = 1$. Consider first the situation without covariates.

As there is independence across if not within strata, the B matrices corresponding to \eqref{eq:ee-hajekdiff} are sums of stratum-wise contributions
\begin{equation}\label{eq:Bform}
B_S((\mu_0, \ldots, \mu_K)) = \mathrm{Cov}\left(\left[\begin{array}{c}
\sum_{i\in S} [\![A_i=K]\!]\, \check{w}_{i[K]} (y_{Ki} - \mu_K) \\
\vdots \\
\sum_{i\in S} [\![A_i=0]\!]\, \check{w}_{i[0]} (y_{0i} - \mu_0)
\end{array}\right] \mid \mathcal{Z}\right), \quad S \in \mathcal{P}.
\end{equation}
For any stratum $S$ and condition $k$ s.t. $\sum_{i\in S} [\![A_i=k]\!] = 1$, $\mathrm{Var}\left\{\sum_{i\in S} [\![A_i=k]\!]\, \check{w}_{i[k]} (y_{ki} - \mu_k)\right\}$ is not estimable, just as the variance $[(\# S) - 1]^{-1}\sum_{i\in S} (y_{ki} - \bar{y}_k)^2$ is not estimable. On the other hand, the second moment $\mathbb{E}\left[\left\{\sum_{i\in S} [\![A_i=k]\!]\, \check{w}_{i[k]} (y_{ki} - \mu_k)\right\}^2\right]$ is estimable, by its sample realization $\left\{\sum_{i\in S} [\![a_i=k]\!]\, \check{w}_{i[k]} (y_{ki} - \mu_k)\right\}^2$: in M-estimation, we’re entitled to treat $\mu_k$ as a fixed constant, not a random variable. Since the second moment bounds the variance from above, $\{\sum_{i\in S} [\![A_i=k]\!]\, \check{w}_{i[k]} (y_{ki} - \mu_k)\}^2$ is a safe, conservative variance estimate.
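The singleton-stratum device is simple to state in code: use the squared realized estimating-function contribution as the (conservative) variance term. A sketch with hypothetical names:

```python
def singleton_second_moment(y_i, w_i, mu_k):
    """For a stratum with exactly one representative of condition k,
    the squared realized contribution (w * (y - mu_k))^2 estimates the
    second moment, which bounds the unidentified variance from above;
    mu_k is treated as a fixed constant, per the M-estimation view."""
    return (w_i * (y_i - mu_k)) ** 2

bound = singleton_second_moment(5.0, 2.0, 3.0)
```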

M-estimation doesn’t help us to estimate
\begin{equation*}
\mathrm{Cov}\left\{\sum_{i\in S} [\![A_i=k]\!]\, \check{w}_{i[k]} (y_{ki} - \mu_k),\ \sum_{i\in S} [\![A_i=0]\!]\, \check{w}_{i[0]} (y_{0i} - \mu_0)\right\},
\end{equation*}
but these terms aren’t identified even for large strata with multiple representatives of either condition. When interest is in the difference $\hat{\mu}_k - \hat{\mu}_0$, conservatism leads us to ``impute’’ the covariance value that maximizes the variance of that difference, the value corresponding to a correlation of 1 between $\sum_{i\in S} [\![A_i=k]\!]\, \check{w}_{i[k]} (y_{ki} - \mu_k)$ and $\sum_{i\in S} [\![A_i=0]\!]\, \check{w}_{i[0]} (y_{0i} - \mu_0)$. We can follow the same principle when approaching the problem from the perspective of M-estimation, whatever the representation of conditions $k$ and $0$ within stratum $S$.
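Since Cauchy-Schwarz bounds the unidentified covariance in magnitude by the product of the standard deviations, one way to implement the variance-maximizing imputation is to evaluate the variance of the difference at both endpoints and keep the larger. A sketch (names hypothetical; the endpoint search avoids committing to a sign convention):

```python
import math

def worst_case_diff_variance(var_T, var_C, a_k, a_0):
    """Variance bound for mu_hat_k - mu_hat_0 when Cov(T, C) between
    the two estimating-function sums T and C is not identified:
    evaluate var_T/a_k**2 + var_C/a_0**2 - 2*cov/(a_k*a_0) at
    cov = +/- sqrt(var_T * var_C) and return the maximum.
    a_k, a_0 are the diagonal A-matrix entries (weight sums)."""
    bound = math.sqrt(var_T * var_C)
    base = var_T / a_k**2 + var_C / a_0**2
    return max(base - 2 * c / (a_k * a_0) for c in (-bound, bound))

v = worst_case_diff_variance(4.0, 9.0, 1.0, 1.0)
```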

In separate hand-written notes I have begun fleshing this out into methods of estimating design-based second moments of first differences of the estimating functions that define the Hajek estimators, which can in turn be used to approximate the corresponding B-matrix terms; extending this work might be a suitable problem for a PhD student.


If covariates $X$ are included in a linear regression specification with unit weights $\sum_{j=0}^K \check{w}_{i[j]} [\![a_i=j]\!]$ — that is, the specification that without $X$ would have engendered estimation of Hajek estimator differences as in \eqref{eq:hajekdiff} — then B-matrix estimation can still be done just as it is without the covariates. The A matrices are no longer diagonal, but are otherwise straightforward. Combining estimates of these A and B matrices would give legitimate standard errors for differences of Hajek estimators as before, but now with adjustment for covariates.
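With covariates in a weighted linear specification, A becomes the weighted cross-product $X'WX$ while B is still built from per-cluster estimating-function contributions. A generic sketch (HC0-style, one row per cluster, hypothetical names; this is not the stacked version discussed next):

```python
import numpy as np

def wls_sandwich(X, y, w):
    """Weighted least squares with a sandwich covariance estimate:
    A = X' W X (no longer diagonal); B = sum of outer products of the
    per-row estimating functions w_i * x_i * (y_i - x_i' beta)."""
    A = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(A, X.T @ (w * y))
    psi = X * (w * (y - X @ beta))[:, None]   # n x p estimating functions
    B = psi.T @ psi
    A_inv = np.linalg.inv(A)
    return beta, A_inv @ B @ A_inv

X = np.column_stack([np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])])
y = np.array([0.0, 1.0, 2.0, 3.0])   # exact linear fit, so residuals vanish
beta, cov = wls_sandwich(X, y, np.ones(4))
```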


Another approach to covariates is to fit their coefficients in a separate and prior regression — perhaps a robust linear fit, or perhaps a binary regression fit — and then to contrast Hajek aggregates of residuals from these fits. This approach calls for stacked estimating equations, with accompanying complications to both the A and the B matrix. We might as well consider it in tandem with the remaining challenges.

To draw a distinction between experimental and non-experimental observations, let $\mathcal{P}$ be a partition of a nonempty subset of $\{1, \ldots, n\}$, so that some but not necessarily all clusters $1, \ldots, n$ fall into some stratum $S \in \mathcal{P}$; these clusters are then the experimental or quasi-experimental sample. Re-define $\mathcal{Z}$ to contain similar information as above, but also full information on clusters falling outside of $\mathcal{E} = \bigcup_{S \in \mathcal{P}} S$:
\begin{equation}\label{eq:Zdefextended}
\mathcal{Z} = \sigma\left(\begin{array}{c}
\{(X_i, Y_{0i}, \ldots, Y_{Ki}) : i \in \mathcal{E}\},\ \{\sum_{i \in S} [\![A_i=k]\!] : k \in \{0, \ldots, K\};\ S \in \mathcal{P}\}, \\
\{(A_i, X_i, Y_{0i}, \ldots, Y_{Ki}) : i \notin \mathcal{E}\}
\end{array}\right).
\end{equation}
One now needs A and B matrices for the estimating functions of the Hajek estimator and the regression estimator simultaneously. Given that we’re ultimately interested only in estimating covariances for the Hajek differences, simplified expressions in terms of A- and B-submatrices are available. Further simplification of the B matrices is possible given our conditioning on $\mathcal{Z}$ as in \eqref{eq:Zdefextended}; the B matrix continues to be a sum of the form \eqref{eq:Bform}, with clusters $i \notin \mathcal{E}$ making no contribution. (The design-based interpretation has no bearing on calculation of $A$ matrices.)
