The 2013 Nobel prize in economics was won by Fama, Shiller, and some other dude, according to most media accounts. Fama and Shiller were pretty easy to explain: one of them is at Chicago and is associated with a theory called “efficient markets,” so he’s the free market guy. Shiller criticized the Chicago guy, so we know where to put him on the political spectrum. But this third guy, Hansen, well, he’s at Chicago, but he does some sort of theoretical econometrics, so if we’re the Guardian we’ll just assume he’s “ultraconservative” and then ignore him, or if we’re anyone else we’ll skip straight to ignoring him (even the Economist gives up, complaining that it can’t explain his work without “writing all sorts of equations in our newspaper”). This post attempts to provide a relatively gentle, albeit with all sorts of equations, introduction to part of the third guy’s research, focusing on applications to causal modeling in microeconomics rather than on examples from finance or macroeconomics.
There are some good discussions of Hansen’s most influential contribution, the Generalized Method of Moments (GMM), in the economics blogosphere; examples include Guan Yang, John Cochrane, and Jeff Leek. This post presents another, which differs mostly in that the discussion does not focus on applications in asset pricing. The basic ideas in Hansen (1982) are elaborations and generalizations of ideas presented in Sargan (1958), which develops overidentified instrumental variables estimation in a modern context, a method mostly used to infer causal effects from observational data.
GMM in a very simple problem.
Suppose we have a sample on some variable $y$ of size $n$, and we would like to estimate the mean of $y$, denoted $\mu$. In this simple case, the method of moments tells us to estimate $\mu$ by replacing the population condition
$$ E[ y - \mu ] = 0 $$
with its sample analog,
$$ \frac{1}{n}\sum_i [ y_i - \hat\mu ] = 0, $$
where our estimator $\hat\mu$ is the value of the parameter which makes the equation above true. The method of moments estimator of $\mu$ is simply the sample mean of $y$, denoted $\bar y$.
We draw another $n$ observations on a different random variable $w$. Suppose theory tells us that the mean of $w$ is the same as the mean of $y$. Following the same reasoning as above, we could estimate $\mu$ using the sample mean of $w$, denoted $\bar w$. But using either of these estimates alone cannot be efficient, as we are wasting the information in the sample we don’t use. Theory tells us that both of these conditions are true
\begin{align}
E(y) - \mu &= 0 \\
E(w) - \mu &= 0
\end{align}
but we cannot generally choose $\hat\mu$ to make both of the sample analogs of these conditions true,
\begin{align}
m_1 &= \bar y - \hat\mu = 0 \\
m_2 &= \bar w - \hat\mu = 0,
\end{align}
so the method of moments can’t be directly applied. But we can generalize (hence, GMM) and make these two moment conditions $m_1$ and $m_2$ as close to true as we can, in the sense of making the squared deviations as small as possible. We could choose $\hat\mu$ to
$$ \textrm{min}_{\mu} (\bar y - \mu)^2 + (\bar w - \mu)^2,$$
minimizing this objective yields consistent (since $\bar y$ and $\bar w$ are each consistent) but usually inefficient estimates, since we should take into account that $\bar y$ and $\bar w$ might have different variances and might be correlated. Intuitively, if $w$ is much noisier than $y$, we should place less weight on observations on $w$ because they contain less information about $\mu$ than observations on $y$. Suppose for simplicity that these are independent samples and thus uncorrelated, but that the variance of $w$ is higher than the variance of $y$. We can get rid of these unequal variances by dividing by the standard deviations (here and throughout the post we’ll assume for simplicity that we know all the variance parameters, abstracting from much of the complication of GMM estimation) and choose $\hat\mu$ to
$$ \textrm{min}_{\mu}
\left ( \frac{\bar y - \mu}{\sigma_{\bar y}}\right)^2
+ \left ( \frac{\bar w - \mu}{\sigma_{\bar w}}\right )^2.$$
Note this is equivalent to weighting each moment condition by the reciprocal of its standard deviation, so that we place more weight on the more precise condition. In general the moments will not be uncorrelated, and we should take that into account too.
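To make the weighting concrete, here is a minimal numerical sketch (the data, seed, and variances are all hypothetical, and sample variances stand in for the known variances assumed above). Minimizing the weighted objective has a closed form: the inverse-variance weighted average of the two sample means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
mu_true = 2.0
y = rng.normal(mu_true, 1.0, size=n)  # precise sample
w = rng.normal(mu_true, 3.0, size=n)  # noisy sample, same mean

ybar, wbar = y.mean(), w.mean()
# estimated variances of the two sample means
v_y, v_w = y.var(ddof=1) / n, w.var(ddof=1) / n

# First-order condition of the weighted objective:
# (ybar - mu)/v_y + (wbar - mu)/v_w = 0
mu_hat = (ybar / v_y + wbar / v_w) / (1 / v_y + 1 / v_w)  # GMM estimate
```

Because $\bar y$ is the more precise estimate, `mu_hat` lands closer to `ybar` than to `wbar`, which is exactly the “more weight on the more precise condition” idea.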
We can form a test statistic against our theory that the means in these two samples are identical. Suppose we use just the observations on $y$, calculate $\bar y$, and then check to see how well that estimate explains the observations on the other variable $w$,
$$ \sum_i ( w_i - \bar y)^2. $$
If the two samples really have the same mean, then as the sample grows $\bar y$ converges to $\mu$, and this expression is the sum of squared zero-mean normals, and we could base a test statistic on that result. Intuitively, if our theory is false, then the deviations $w_i - \bar y$ do not tend to zero-mean random variables, and when we square them the results tend to be larger than if they did have zero mean.
That’s not the best way to test our theory, however. If our theory is true then the objective tends to the sum of two squared zero-mean variables, but if our theory is incorrect the objective function tends to the sum of squared nonzero-mean variables. A test can be based on this idea: if the realized value of the objective function when we minimize it would have to be very far out in the tail of the distribution obtained when the theory is correct (here, the $\chi^2(1)$ distribution), we have evidence against our theory that $y$ and $w$ have the same mean. This is a simple example of Hansen’s test, or the J test. Note that if we only observe one of $y$ or $w$ but not both, we have zero degrees of freedom left over to test the assumptions of the model and we cannot conduct this test.
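A sketch of this test in the same hypothetical setup (again using sample variances in place of known ones; the 5% critical value of the $\chi^2(1)$ distribution is hard-coded as 3.84 to avoid a scipy dependency):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
y = rng.normal(2.0, 1.0, size=n)
w = rng.normal(2.0, 3.0, size=n)  # theory is true here: same mean

ybar, wbar = y.mean(), w.mean()
v_y, v_w = y.var(ddof=1) / n, w.var(ddof=1) / n

# GMM estimate and the minimized weighted objective, the J statistic
mu_hat = (ybar / v_y + wbar / v_w) / (1 / v_y + 1 / v_w)
J = (ybar - mu_hat) ** 2 / v_y + (wbar - mu_hat) ** 2 / v_w

# Under the null, J is asymptotically chi-square with
# (#moments - #parameters) = 1 degree of freedom
reject = bool(J > 3.84)
```

Simulating with two different true means instead would push `J` out into the tail and make `reject` true with high probability.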
Linear regression.
The reasoning above can be applied to a very wide variety of problems, yielding various GMM estimators depending on which variables have zero mean under some theory. Consider a univariate linear regression model of the form
$$ y = \beta x + u, $$
where we interpret this equation as causal: $\beta$ is the causal effect of a one-unit change in $x$ on $y$, holding other causes of $y$, $u$, constant (and we assume that all variables have zero mean for simplicity). Suppose the data came from a randomized experiment on $x$ and that we have a random sample. Then $x$ and $u$ are uncorrelated; in other words, the random variables $x_i u_i$ have mean zero,
$$ E [ x_i u_i ] = E[ x_i(y_i - \beta x_i) ] = 0, $$
the sample analog of which is
$$ \frac{1}{n} \sum_i x_i(y_i - \beta x_i)=0.$$
Since we have one parameter and one equation, we can always make this condition hold exactly, and the solution is easily seen to be the OLS estimator, even if the errors are heteroskedastic or correlated. We cannot test our theory that $x$ and $u$ are uncorrelated: since we have one parameter and one equation to solve, we can always make the sample analog of the moment condition true.
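A quick check of this claim on hypothetical simulated data; `beta_hat` below solves the sample moment condition in closed form and is the no-intercept OLS estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
beta_true = 1.5
x = rng.normal(size=n)
u = rng.normal(size=n)        # independent of x, as under randomization
y = beta_true * x + u

# Solve (1/n) sum_i x_i (y_i - beta x_i) = 0 for beta:
beta_hat = np.sum(x * y) / np.sum(x ** 2)   # the (no-intercept) OLS estimator
```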
Instrumental variables.
Now suppose that $x$ did not come from a perfect randomized experiment; instead we have observational data and no reason to suppose that causes of $y$ we do observe ($x$) are uncorrelated with causes we don’t observe ($u$). The condition $E[x_i u_i] = 0$ no longer holds, and an estimator based on that condition will have undesirable properties. But suppose we observe a variable $z_1$ which has the property that
$$E( z_{1i} u_i ) = E [ z_{1i} (y_i - \beta x_i) ] = 0,$$
that is, $z_1$ should not covary with $y$ if $x$ is held fixed. The sample analog of this condition gives us the method of moments estimator of $\beta$, which turns out to be the simple linear instrumental variables estimator: the ratio of the covariance between $z_1$ and $y$ to the covariance between $z_1$ and $x$. We cannot test our theory that $z_1$ only affects $y$ because $z_1$ affects $x$ because, with one equation and one parameter, we can always find a value of $\hat\beta$ to make the sample analog of this condition true.
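A sketch of the simple IV estimator on hypothetical simulated data (the coefficients and the confounding structure are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
beta_true = 1.5
z1 = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z1 + u + rng.normal(size=n)      # x depends on both z1 and u
y = beta_true * x + u                      # u also shifts y, so OLS is biased

# Method of moments on E[z1 (y - beta x)] = 0: with zero-mean variables this
# is the ratio of sample covariances cov(z1, y) / cov(z1, x)
beta_iv = np.sum(z1 * y) / np.sum(z1 * x)
beta_ols = np.sum(x * y) / np.sum(x ** 2)  # inconsistent in this design
```

Here the positive correlation between $x$ and $u$ pushes `beta_ols` above `beta_true`, while `beta_iv` stays close to it.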
Now suppose we have available a second instrument, $z_2$, which theory tells us also affects $y$ only because $z_2$ affects $x$.
The diagram illustrates the model. The variable $u$, colored red to denote that we can’t observe this variable, confounds the relationship between $x$ and $y$, implying that the covariance between $x$ and $y$ does not reveal $\beta$, the causal effect of $x$ on $y$.
But we can estimate the covariances between $z_1$ and $y$ and between $z_2$ and $y$; inspection of the diagram tells us these should be equal to $\gamma_1\beta$ and $\gamma_2\beta$, where $\gamma_1$ and $\gamma_2$ denote the effects of $z_1$ and $z_2$ on $x$. A one-unit increase in $z_1$ causes a $\gamma_1$-unit increase in $x$, and since a one-unit increase in $x$ causes a $\beta$-unit increase in $y$, a one-unit increase in $z_1$ causes a $\gamma_1\beta$ change in $y$. And likewise for $z_2$: the diagram tells us that covariance between $z_2$ and $y$ can only occur if $\beta \neq 0$, and we can infer a value for $\beta$ from that covariance divided by the covariance between $z_2$ and $x$.
So we have two distinct causal paths, either of which allows us to estimate the causal effect of $x$ on $y$, just like in the introductory example we had two different ways of estimating the mean $\mu$. GMM tells us how to optimally combine these two insights to produce the most precise single estimate of $\beta$ under the theory that $z_1$ and $z_2$ only affect $y$ because they affect $x$, just like above GMM told us how to optimally combine two samples which have the same mean under some theory.
Theory tells us that both of these conditions are true
\begin{align}
E(z_{1i}u_i) &= E[ z_{1i}(y_i - \beta x_i) ] = 0 \\
E(z_{2i}u_i) &= E[ z_{2i}(y_i - \beta x_i) ] = 0
\end{align}
but we cannot in general choose the single parameter $\hat\beta$ to make both of the sample analogs of these conditions true. Suppose for simplicity that $z_1$ and $z_2$ are uncorrelated and that the $u_i$ are heteroskedastic but uncorrelated. Then the theoretical moments above have sample counterparts
\begin{align}
m_1 &= \frac{1}{n} \sum_i [z_{1i}(y_i - \beta x_i)]\\
m_2 &= \frac{1}{n} \sum_i [z_{2i}(y_i - \beta x_i)]
\end{align}
with variances $V(m_j)$, for $j = 1, 2$. Then selecting $\hat\beta$ to
$$
\textrm{min}_{\beta} \left( \frac{m_1}{\sqrt{V(m_1)}}\right)^2 + \left( \frac{m_2}{\sqrt{V(m_2)}}\right)^2
$$
yields the GMM estimator of the causal effect of $x$ on $y$. As in the introductory example, the moment conditions are weighted by the reciprocal of their standard deviations, since we want to put more weight on the more precise condition. More generally, we should also take into account that $z_1$ and $z_2$ (and sometimes the $u_i$) will generally be correlated.
Just like in the simple case above of estimating a mean from two samples, if our theory is true then the minimized value of the objective function is asymptotically distributed $\chi^2(1)$, so estimation by GMM produces a test of the theory as a byproduct of estimation (in this context, this test statistic is also called the Sargan test). Intuitively, if $z_1$ and $z_2$ are both uncorrelated with $u$, then we could form the simple IV estimator using just $z_1$, and the residuals from that exercise should be uncorrelated (up to sampling noise) with $z_2$. If they are not, then we’re not sure what’s wrong (either $z_1$ or $z_2$ could be correlated with $u$), but we conclude that something is wrong with our theory. This is not the most efficient test, though. As in the introductory example, the value of the minimized objective function forms a test statistic against the null that our theory is correct.
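Putting the pieces together, here is a sketch of the two-instrument GMM estimator and its J statistic on hypothetical simulated data. The weighted objective is minimized over a crude grid rather than with a proper optimizer, and the moment variances are estimated at each candidate $\beta$ rather than known, so this is an illustration, not a production estimator.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
beta_true = 1.5
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
u = rng.normal(size=n)                    # unobserved confounder
x = 0.8 * z1 + 0.5 * z2 + u + rng.normal(size=n)
y = beta_true * x + u                     # z1, z2 affect y only through x

def objective(beta):
    r = y - beta * x                      # residual at this candidate beta
    m1, m2 = np.mean(z1 * r), np.mean(z2 * r)
    v1 = np.var(z1 * r, ddof=1) / n       # estimated variances of the moments
    v2 = np.var(z2 * r, ddof=1) / n
    return m1 ** 2 / v1 + m2 ** 2 / v2

grid = np.linspace(0.5, 2.5, 2001)        # crude grid search over beta
vals = np.array([objective(b) for b in grid])
beta_gmm = grid[np.argmin(vals)]
J = vals.min()  # overidentification statistic, ~ chi2(1) under the theory
```

Breaking the exclusion restriction in the simulation, say by adding `z2` directly into `y`, would tend to inflate `J` past the $\chi^2(1)$ critical value.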
Micro applications of GMM to infer causality.
In the preceding example, GMM allows us to estimate the causal effect of $x$ on $y$ using a theory that says that two variables $z_1$ and $z_2$ only affect $y$ because they affect $x$. GMM tells us how to make use of our theory to make our estimates as precise as possible, and as a byproduct of estimation provides a test statistic against the null hypothesis that our theory is correct (warning: our theory could also be incorrect not because the $z$’s affect $y$ for some other reason than through $x$, but rather because the causal effect varies across units in the population, see e.g. Heckman, Urzua, and Vytlacil 2006).
GMM can be applied to much more complicated problems to estimate causal effects in a wide variety of nonlinear regression models (e.g., Windmeijer 2006), and to estimate the deep parameters in estimable choice models which can be used to produce out-of-sample predictions which sidestep the Lucas critique, e.g., Hotz and Miller (1993) or Ferrall (2012).
Tags: causation, econometrics, GMM
